Position dependent prediction combinations in video encoding
Patent abstract:
A video encoder can generate a predictor block using an intra prediction mode. As part of the generation of the predictor block, the video encoder can, for each respective sample in a set of samples in the predictor block, determine, based on an initial value of a first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample. In addition, the video encoder can determine, based on an initial value of a second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample. The video encoder can also determine a primary value for the respective sample. The video encoder can then determine a secondary value for the respective sample based on the first weight, the second weight, and the primary value. Publication number: BR112020006568A2 Application number: R112020006568-4 Filing date: 2018-10-09 Publication date: 2020-10-06 Inventors: Xin Zhao; Vadim SEREGIN; Amir Said; Marta Karczewicz; Kai Zhang; Vijayaraghavan Thirumalai Applicant: Qualcomm Incorporated; IPC main class:
Patent description:
[0001] This application claims the benefit of US Provisional Patent Application 62/570,019, filed on October 9, 2017, and claims priority to US Application 16/154,261, filed on October 8, 2018, the entire content of which is incorporated by reference. TECHNICAL FIELD [0002] This disclosure relates to video encoding. BACKGROUND [0003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, satellite or cellular radio phones, smartphones, teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264 / MPEG-4, Part 10, Advanced Video Coding (AVC), ITU-T H.265, the High Efficiency Video Coding (HEVC) standard, and extensions of such standards. Video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques. [0004] Video compression techniques can perform spatial (intra-image) and/or temporal (inter-image) prediction to reduce or remove the redundancy inherent in video sequences. For block-based video encoding, a video slice (for example, a video frame or part of a video frame) can be partitioned into video blocks, such as coding tree blocks and coding blocks. Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be encoded and the predictive block. For additional compression, residual data can be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which can be quantized.
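The pixel-difference residual described in the paragraph above can be illustrated with a short sketch. This is an illustration only; the block contents and the function name residual_block are invented for this example:

```python
def residual_block(original, predictor):
    """Residual data: per-sample differences between the original
    block to be encoded and the predictive block, as described above.
    Blocks are represented as lists of rows of sample values."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predictor)]

# A good predictor leaves small residual values, which compress better.
original = [[52, 55], [61, 59]]
predictor = [[50, 54], [60, 60]]
print(residual_block(original, predictor))  # → [[2, 1], [1, -1]]
```

The residual samples would then be transformed and quantized as the paragraph describes; those later steps are not shown here.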
SUMMARY [0005] In general, this disclosure describes techniques related to intra prediction and intra mode coding. The techniques of this disclosure can be used in the context of advanced video codecs, such as HEVC extensions or the next generation of video encoding standards. [0006] In one example, this disclosure describes a method of decoding video data, the method comprising: generating a predictor block using an intra prediction mode, in which generating the predictor block comprises: [0007] In another example, this disclosure describes a method of encoding video data, the method comprising: generating a predictor block using an intra prediction mode, in which generating the predictor block comprises: determining an initial value of a first weight; determining an initial value of a second weight; for each respective sample in a set of samples in the predictor block: determining, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determining, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determining a value of a third weight for the respective sample; determining a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determining a primary value for the respective sample according to the intra prediction mode; and determining a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left
reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and generating residual data based on the predictor block and an encoding block of the video data. [0008] In another example, this disclosure describes a device for decoding video data, the device comprising: one or more storage media configured to store video data; and one or more processors configured to: generate a predictor block using an intra prediction mode, in which the one or more processors are configured so that, as part of the generation of the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary
value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and reconstruct, based on the predictor block and residual data, a decoded block of video data.
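The weighted combination recited in the examples above can be sketched in code. This is an illustrative sketch only, not the claimed method: the particular weight values, the shift amount of 6, and the helper name pdpc_sample are assumptions chosen so that the four weights sum to a power of two.

```python
def pdpc_sample(primary, left_ref, above_ref, above_left_ref,
                w_left, w_above, w_above_left):
    """Combine a primary intra prediction value with boundary
    reference samples, in the style of the weighted scheme above.

    Assumes the four weights sum to 2**6 = 64, so that the right
    shift by 6 normalizes the weighted sum; these numbers are
    illustrative, not normative."""
    shift = 6
    offset = 1 << (shift - 1)  # rounding offset added before the right shift
    # The fourth weight is derived from the other three so that all
    # four weights sum to 2**shift.
    w_primary = (1 << shift) - w_left - w_above - w_above_left
    first_value = (w_left * left_ref
                   + w_above * above_ref
                   + w_above_left * above_left_ref
                   + w_primary * primary
                   + offset)
    return first_value >> shift  # the secondary value

# When all inputs are equal, the combination preserves the value.
print(pdpc_sample(100, 100, 100, 100, 8, 8, -2))  # → 100
print(pdpc_sample(120, 100, 140, 110, 8, 8, -2))  # → 120
```

In the techniques described above, the first and second weights would additionally decay with the sample's distance from the left and top boundaries of the predictor block; here they are passed in directly to keep the sketch short.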
[0009] In another example, this disclosure describes a device for encoding video data, the device comprising: one or more storage media configured to store video data; and one or more processors configured to: generate a predictor block using an intra prediction mode, in which the one or more processors are configured so that, as part of the generation of the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and generate residual data based on the predictor block and an encoding block of the video data. [0010] In another example, this disclosure describes a device for decoding video data, the device comprising: means for storing video data; and means for generating a predictor block using an intra prediction mode, wherein the means for generating the predictor block comprises: means for determining an initial value of a first weight; means for determining an initial value of a second weight; for each respective sample in a set of samples in the predictor block: means for determining, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; means for determining, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; means for determining a value of a third weight for the respective sample; means for determining a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; means for determining a primary value for the respective sample according to the intra prediction mode; and means for determining a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left
reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and means for reconstructing, based on the predictor block and residual data, a decoded block of video data. [0011] In another example, this disclosure describes a device for encoding video data, the device comprising: means for storing video data; and means for generating a predictor block using an intra prediction mode, wherein the means for generating the predictor block comprises: means for determining an initial value of a first weight; means for determining an initial value of a second weight; for each respective sample in a set of samples in the predictor block: means for determining, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; means for determining, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; means for determining a value of a third weight for the respective sample; means for determining a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; means for determining a primary value for the respective
sample according to the intra prediction mode; and means for determining a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and means for generating residual data based on the predictor block and an encoding block of the video data.
[0012] In another example, this disclosure describes a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to: generate a predictor block using an intra prediction mode, where the one or more processors are configured so that, as part of the generation of the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the
respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and reconstruct, based on the predictor block and residual data, a decoded block of video data. [0013] In another example, this disclosure describes a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to: generate a predictor block using an intra prediction mode, in which the one or more processors are configured such that, as part of the generation of the predictor block, the one or more processors: [0014] Details of one or more aspects of the disclosure are set out in the attached drawings and in the description below. Other features, objects, and advantages of the techniques described in this disclosure will be evident from the description, drawings, and claims. BRIEF DESCRIPTION OF THE DRAWINGS [0015] Figure 1 is a block diagram that illustrates an example of a video encoding and decoding system that can use one or more techniques described in this disclosure. [0016] Figure 2 illustrates an example of intra prediction for a 16 x 16 block. [0017] Figure 3 illustrates an example of intra prediction modes. [0018] Figure 4 is a conceptual diagram that illustrates an example of the planar mode as defined in HEVC. [0019] Figure 5 is a conceptual diagram that illustrates an example of an intra angular prediction mode. [0020] Figure 6A is a conceptual diagram that illustrates an example of data available for position-dependent prediction combination for a 4 x 4 pixel block. [0021] Figure 6B is a conceptual diagram that illustrates an example of data available for position-dependent prediction combination for a 4 x 4 pixel block.
[0022] Figure 7A is a block diagram that illustrates an example using a planar/DC mode with a weighting applied to generate a prediction sample (0, 0), according to a technique of this disclosure. [0023] Figure 7B is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (1, 0), according to a technique of this disclosure. [0024] Figure 7C is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (0, 1), according to a technique of this disclosure. [0025] Figure 7D is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (1, 1), according to a technique of this disclosure. [0026] Figure 8 is a block diagram that illustrates an example of a video encoder that can implement one or more techniques described in this disclosure. [0027] Figure 9 is a block diagram that illustrates an example of a video decoder that can implement one or more techniques described in this disclosure. [0028] Figure 10 is a flow chart that illustrates an example operation of a video encoder according to a technique of this disclosure. [0029] Figure 11 is a flow chart that illustrates an example operation of a video decoder according to a technique of this disclosure. DETAILED DESCRIPTION [0030] A video coder (for example, a video encoder or a video decoder) can use intra prediction to generate a predictor block for a current block of a current image. In general, when using intra prediction to generate a predictor block, the video coder determines a set of reference samples in a column to the left of the current block in the current image and/or in a row above the current block in the current image. The video coder can then use the reference samples to determine sample values in the predictor block.
[0031] In High Efficiency Video Coding (HEVC) and other video encoding standards, the video coder performs intra-reference smoothing. When the video coder performs intra-reference smoothing, the video coder applies a filter to the reference samples before using the reference samples to determine predicted sample values in the predictor block. For example, the video coder can apply a 2-tap bilinear filter, a 3-tap (1,2,1)/4 filter, or a mode-dependent smoothing filter to the reference samples. In the filter description above, '/4' denotes normalization by dividing the results by 4. In many situations, performing intra-reference smoothing improves prediction accuracy, especially when the current block represents a smoothly varying gradient. [0032] Although intra-reference smoothing can improve prediction accuracy in many situations, there are other situations in which the use of unfiltered reference samples may be beneficial. Position-dependent prediction combination (PDPC) is a scheme that was developed to address these problems and improve intra prediction. In the PDPC scheme, a video coder determines a value of a predictor block sample based on filtered reference samples, unfiltered reference samples, and the position of the predictor sample within the predictor block. Use of the PDPC scheme can be associated with gains in coding efficiency. For example, the same amount of video data can be encoded using fewer bits. [0033] Despite the coding efficiency gains associated with using the PDPC scheme described above, there are several disadvantages. For example, the PDPC scheme is limited to the planar mode only, to control encoder complexity, which can limit the coding gain contributed by the PDPC scheme. [0034] This disclosure describes techniques that can improve the PDPC scheme described above, resulting in a simplified PDPC scheme.
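The 3-tap (1,2,1)/4 reference smoothing described above can be sketched in a few lines. This is an illustration, not the normative HEVC filtering process: the function name is invented, and the edge handling (copying the first and last samples unfiltered because they lack two neighbors) is an assumption of this sketch.

```python
def smooth_references(refs):
    """Apply a 3-tap (1, 2, 1)/4 smoothing filter to a list of intra
    reference samples. The '/4' normalization described above is
    implemented as '+ 2 >> 2' for integer rounding. The first and
    last samples are copied unfiltered (a sketch-only assumption)."""
    if len(refs) < 3:
        return list(refs)
    out = [refs[0]]
    for i in range(1, len(refs) - 1):
        out.append((refs[i - 1] + 2 * refs[i] + refs[i + 1] + 2) >> 2)
    out.append(refs[-1])
    return out

# A sharp spike in the reference samples is attenuated by the filter.
print(smooth_references([10, 20, 60, 20, 10]))  # → [10, 28, 40, 28, 10]
```

As the paragraphs above note, whether smoothed or unfiltered references predict better depends on the content, which is the motivation for combining both in PDPC.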
For example, according to one example technique of this disclosure, a video coder (for example, a video encoder or a video decoder) generates a predictor block using an intra prediction mode. As part of the generation of the predictor block, the video coder can determine an initial value of a first weight and determine an initial value of a second weight. [0035] Figure 1 is a block diagram illustrating an example of a video encoding and decoding system 10 that can use the techniques of this disclosure. As shown in Figure 1, system 10 includes a source device 12 that provides encoded video data to be decoded later by a destination device 14. In particular, the source device 12 provides encoded video data to the destination device 14 via a computer-readable medium 16. The source device 12 and the destination device 14 can include any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, the source device 12 and the destination device 14 are equipped for wireless communication. Thus, the source device 12 and the destination device 14 can be wireless communication devices. The techniques described in this disclosure can be applied to wireless and/or wired applications. The source device 12 is an example of a video encoding device (i.e., a device for encoding video data). The destination device 14 is an example of a video decoding device (i.e., a device for decoding video data). [0036] System 10 illustrated in Figure 1 is just one example. Techniques for processing video data can be performed by any digital video encoding and/or decoding device. In some examples, the techniques can be performed by a video encoder/decoder, typically called a "CODEC".
The source device 12 and the destination device 14 are examples of such encoding devices in which the source device 12 generates encoded video data for transmission to the destination device 14. In some examples, the source device 12 and the destination device 14 operate in a substantially symmetrical manner, so that each of the source device 12 and the destination device 14 includes video encoding and decoding components. Consequently, system 10 can support unidirectional or bidirectional video transmission between the source device 12 and the destination device 14, for example, for video streaming, video playback, video broadcasting, or video telephony. [0037] In the example in Figure 1, the source device 12 includes a video source 18, storage media 19 configured to store video data, a video encoder 20, and an output interface 22. The destination device 14 includes an input interface 26, storage media 28 configured to store encoded video data, a video decoder 30, and a display device 32. [0038] Video source 18 is a source of video data. The video data can include a series of images. Video source 18 may include a video capture device, such as a video camera, a video file containing previously captured video, and/or a video feed interface for receiving video data from a video content provider. In some examples, video source 18 generates video data based on computer graphics, or a combination of live video, archived video, and computer-generated video. Storage media 19 can be configured to store video data. In each case, the captured, pre-captured, or computer-generated video can be encoded by the video encoder 20. [0039] Output interface 22 can output encoded video information to a computer-readable medium 16. Output interface 22 can include various types of components or devices. For example, output interface 22 may include a wireless transmitter, a modem, a wired network component (for example, an Ethernet card), or another physical component.
In examples where output interface 22 includes a wireless transmitter, output interface 22 can be configured to transmit data, such as encoded video data, modulated according to a cellular communication standard, such as 4G, 4G-LTE, LTE Advanced, 5G, and the like. In some examples where output interface 22 includes a wireless transmitter, output interface 22 can be configured to transmit data, such as encoded video data, modulated according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (for example, ZigBee™), a Bluetooth™ standard, and the like. In some examples, the circuitry of output interface 22 is integrated into the circuitry of video encoder 20 and/or other components of the source device 12. [0040] The destination device 14 can receive encoded video data to be decoded via computer-readable medium 16. Computer-readable medium 16 can include any type of medium or device capable of moving the encoded video data from the source device 12 to the destination device 14. [0041] In some examples, output interface 22 may output data, such as encoded video data, to an intermediate device, such as a storage device. Likewise, the input interface 26 of the destination device 14 can receive encoded data from the intermediate device. The intermediate device can include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In some examples, the intermediate device corresponds to a file server. Example file servers include web servers, FTP servers, network-attached storage (NAS) devices, or local disk drives. [0042] The destination device 14 can access the encoded video data through any standard data connection, including an Internet connection.
This can include a wireless channel (for example, a Wi-Fi connection), a wired connection (for example, DSL, cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device can be a streaming transmission, a download transmission, or a combination thereof. [0043] Computer-readable medium 16 may include transient media, such as a wireless transmission or a wired network transmission, or storage media (that is, non-transitory storage media), such as a hard disk, flash drive, CD, digital video disc, Blu-ray disc, or other computer-readable media. In some examples, a network server (not shown) can receive encoded video data from the source device 12 and provide the encoded video data to the destination device 14, for example, via network transmission. Likewise, a computing device in a media production facility, such as a disc recording facility, can receive encoded video data from the source device 12 and produce a disc containing the encoded video data. Therefore, the computer-readable medium 16 can be understood as including one or more computer-readable media of various forms, in various examples. [0044] The input interface 26 of the destination device 14 receives data from the computer-readable medium 16. [0045] The storage media 28 can be configured to store encoded video data, such as encoded video data (for example, a bit stream) received by the input interface 26. The display device 32 displays the decoded video data to a user. The display device 32 can include any of a variety of display devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device.
[0046] Video encoder 20 and video decoder 30 each can be implemented as any of a variety of suitable circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combination thereof. When the techniques are partially implemented in software, a device can store instructions for the software in a suitable non-transitory computer-readable medium and can execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of the video encoder 20 and the video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined encoder/decoder (CODEC) in a respective device. [0047] In some examples, video encoder 20 and video decoder 30 encode and decode video data according to a video encoding standard or specification. For example, video encoder 20 and video decoder 30 can encode and decode video data according to ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multi-View Video Coding (MVC) extensions, or another video encoding standard or specification. In some examples, video encoder 20 and video decoder 30 encode and decode video data according to the High Efficiency Video Coding (HEVC) standard, also known as ITU-T H.265, its screen content coding and range extensions, its 3D video encoding extension (3D-HEVC), its multi-view extension (MV-HEVC), or its scalable extension (SHVC). A Joint Video Exploration Team (JVET) is currently developing the Versatile Video Coding (VVC) standard based on the joint exploration model. [0048] This disclosure can generally refer to "signaling" certain information, such as syntax elements.
The term "signaling" can generally refer to the communication of syntax elements and/or other data used to decode the encoded video data. This communication can occur in real time or near real time. Alternatively, this communication can occur over a period of time, as can occur when storing syntax elements to a computer-readable storage medium in a bit stream at the time of encoding, which can then be retrieved by a decoding device at any time after being stored in this medium. [0049] [0049] The video data includes a series of images. Images can also be called "frames". An image can include one or more sample matrices. Each respective sample matrix of an image includes a two-dimensional sample matrix for a respective color component. For example, an image can include a two-dimensional matrix of luma samples, a two-dimensional matrix of Cb chroma samples and a two-dimensional matrix of Cr chroma samples. In other cases,
Each residual sample can indicate a difference between a sample of one of the generated predictor blocks and a corresponding sample of the block being encoded. The video encoder 20 can apply a transformation to blocks of residual samples to generate transformation coefficients. In addition, the video encoder 20 can quantize the transformation coefficients. In some examples, video encoder 20 may generate one or more syntax elements to represent a transformation coefficient. The video encoder 20 can entropy encode one or more of the syntax elements representing the transformation coefficient. [0053] [0053] In some video encoding specifications, to generate an encoded representation of an image, video encoder 20 partitions each sample matrix of the image into encoding tree blocks (CTBs) and encodes the CTBs. A CTB is an N x N block of samples in a sample matrix of an image. For example, a CTB can vary in size from 16 x 16 to 64 x 64. [0054] [0054] An encoding tree unit (CTU) of an image includes one or more collocated CTBs and syntax structures used to encode the samples of the one or more collocated CTBs. For example, each CTU can include a luma sample CTB of an image, two corresponding chroma sample CTBs, and syntax structures used to encode the samples of the CTBs. In monochrome images or images with three separate color planes, a CTU can include a single CTB and syntax structures used to encode the samples of the CTB. A CTU can also be called a "tree block" or "largest coding unit" (LCU). In this disclosure, a "syntax structure" can be defined as zero or more syntax elements present together in a bit stream in a specified order. In some codecs, an encoded image is an encoded representation containing all CTUs of the image. [0055] [0055] To encode a CTU of an image, video encoder 20 can partition the CTBs of the CTU into one or more coding blocks. A coding block is an N x N block of samples.
In some codecs, to encode a CTU of an image, video encoder 20 can partition the CTBs of the CTU into coding blocks according to a tree structure, hence the name "coding tree units". [0056] [0056] A coding unit (CU) includes one or more coding blocks and syntax structures used to encode the samples of the one or more coding blocks. For example, a CU can include a luma sample coding block and two corresponding chroma sample coding blocks of an image that has a luma sample matrix, a Cb sample matrix and a Cr sample matrix, and syntax structures used to encode the samples of the coding blocks. In monochrome images or images with three separate color planes, a CU can include a single coding block and syntax structures used to encode the samples of the coding block. [0057] [0057] In addition, video encoder 20 can encode CUs of an image of the video data. In some codecs, as part of encoding a CU, video encoder 20 can partition a coding block of the CU into one or more predictor blocks. A predictor block is a rectangular (that is, square or non-square) block of samples to which the same prediction is applied. A prediction unit (PU) of a CU can include one or more predictor blocks of the CU and syntax structures used to predict the one or more predictor blocks. For example, a PU can include a luma sample predictor block, two corresponding chroma sample predictor blocks, and syntax structures used to predict the predictor blocks. In monochrome images or images with three separate color planes, a PU can include a single predictor block and syntax structures used to predict the predictor block. In other codecs, video encoder 20 does not partition a coding block of a CU into predictor blocks. Instead, the prediction occurs at the CU level. Thus, the coding block of the CU can be synonymous with a predictor block of the CU.
[0058] [0058] The video encoder 20 can generate predictor blocks (for example, luma, Cb and Cr predictor blocks) for prediction blocks (for example, luma, Cb and Cr prediction blocks) of a CU. Video encoder 20 can use intra prediction or inter prediction to generate a predictor block. If video encoder 20 uses intra prediction to generate a predictor block, video encoder 20 can generate the predictor block based on decoded samples of the image that includes the CU. If video encoder 20 uses inter prediction to generate a predictor block of a current image, video encoder 20 can generate the predictor block based on decoded samples of a reference image (that is, an image other than the current image). [0059] [0059] A video encoder, such as video encoder 20 or video decoder 30, can perform intra prediction using an intra prediction mode selected from the available intra prediction modes. The intra prediction modes can include directional intra prediction modes, which can also be referred to as intra prediction directions. Different directional intra prediction modes correspond to different angles. In some examples, to determine a value of a current sample of a predictor block using a directional intra prediction mode, the video encoder can determine a point where a line passing through the current sample at the angle corresponding to the directional intra prediction mode crosses a set of edge samples. The edge samples can include samples in a column immediately to the left of the predictor block and samples in a row immediately above the predictor block. If the point is between two of the edge samples, the video encoder can interpolate or otherwise determine a value corresponding to the point. If the point coincides with a single edge sample, the video encoder can determine that the value of the point is equal to that edge sample. The video encoder can set the value of the current sample of the predictor block equal to the determined value of the point.
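The projection-and-interpolation procedure just described can be sketched in a few lines. This is a simplified, floating-point illustration that assumes, for brevity, that the prediction line always crosses the reference row above the block; the function and parameter names are illustrative, not from the disclosure.

```python
def predict_angular_sample(top_ref, x, y, tan_angle):
    """Project sample (x, y) onto the reference row above the block along
    the prediction direction and interpolate between the two nearest
    reference samples. top_ref[i] holds the reconstructed sample at
    position (i, -1); tan_angle is the horizontal displacement per row
    implied by the prediction angle (an illustrative parameterization)."""
    pos = x + (y + 1) * tan_angle      # projected horizontal position
    left = int(pos)                    # index of the left neighbor L
    frac = pos - left                  # fractional offset in [0, 1)
    if frac == 0.0:                    # projection hits a sample exactly
        return top_ref[left]
    # two-tap bilinear interpolation between neighbors L and R
    return (1.0 - frac) * top_ref[left] + frac * top_ref[left + 1]
```

When the projected point falls between two reference samples, the value is interpolated; when it coincides with a sample, that sample is used directly, matching the two cases described above.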
[0060] [0060] The video encoder 20 can generate one or more residual blocks for the CU. For example, video encoder 20 can generate a separate residual block for each color component (for example, luma, Cb and Cr). Each sample in a residual block of the CU for a color component indicates a difference between a sample in a predictor block of the CU for the color component and a corresponding sample in the coding block of the CU for the color component. [0061] [0061] In some video encoding standards, video encoder 20 can decompose the residual blocks of a CU into one or more transformation blocks. For example, video encoder 20 can use quadtree partitioning to decompose the residual blocks of a CU into one or more transformation blocks. A transformation block is a rectangular (for example, square or non-square) block of samples to which the same transformation is applied. A transformation unit (TU) of a CU can include one or more transformation blocks. The video encoder 20 can apply one or more transformations to a transformation block of a TU to generate a coefficient block for the TU. A coefficient block is a two-dimensional matrix of transformation coefficients. The video encoder 20 can generate syntax elements indicating some or all of the potentially quantized transformation coefficients. The video encoder 20 can entropy encode (for example, using context-adaptive binary arithmetic coding (CABAC)) one or more of the syntax elements that indicate a quantized transformation coefficient. [0062] [0062] Video encoder 20 can output a bit stream that includes encoded video data. In other words, video encoder 20 can output a bit stream that includes an encoded representation of video data. The encoded representation of the video data can include an encoded representation of images of the video data. For example, the bit stream may include a sequence of bits that forms a representation of encoded images of the video data and associated data.
In some examples, a representation of an encoded image may include encoded representations of blocks of the image. [0063] [0063] Video decoder 30 can receive a bit stream generated by video encoder 20. As noted above, the bit stream can include an encoded representation of video data. The video decoder 30 can decode the bit stream to reconstruct images of the video data. As part of decoding the bit stream, the video decoder 30 obtains syntax elements from the bit stream. The video decoder 30 reconstructs images of the video data based, at least in part, on the syntax elements obtained from the bit stream. The process for reconstructing images of the video data can generally be reciprocal to the process performed by video encoder 20 to encode the images. [0064] [0064] For example, as part of decoding an image of the video data, video decoder 30 can use inter prediction or intra prediction to generate predictor blocks for CUs of the image. In addition, video decoder 30 can determine transformation coefficients based on syntax elements obtained from the bit stream. In some examples, the video decoder 30 inversely quantizes the determined transformation coefficients. Inverse quantization maps a quantized value to a reconstructed value. For example, video decoder 30 can inversely quantize a value by determining the value multiplied by a quantization step size. In addition, the video decoder 30 can apply an inverse transformation to the determined transformation coefficients to determine values of residual samples. The video decoder 30 can reconstruct a block of the image based on the residual samples and the corresponding samples of the generated predictor blocks. For example, video decoder 30 can add the residual samples to the corresponding samples of the generated predictor blocks to determine reconstructed samples of the block.
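The quantization, inverse quantization and reconstruction steps just described can be illustrated with a minimal sketch. This assumes a plain scalar quantizer with a step size and omits the rounding offsets, scaling lists and bit-depth handling of a real codec; the function names are illustrative.

```python
def quantize(coeff, qstep):
    # forward quantization: map a transformation coefficient to a level
    return round(coeff / qstep)

def dequantize(level, qstep):
    # inverse quantization as described: the level times the step size
    return level * qstep

def reconstruct(pred, residual):
    # decoder-side reconstruction: predictor samples plus residual samples
    return [p + r for p, r in zip(pred, residual)]
```

Note that quantization is lossy: dequantizing a quantized coefficient generally yields an approximation of the original value, not the value itself.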
[0065] [0065] More specifically, the video decoder 30 can use inter prediction or intra prediction to generate one or more predictor blocks for each PU of a current CU. In addition, video decoder 30 can inversely quantize coefficient blocks of TUs of the current CU. The video decoder 30 can perform inverse transformations on the coefficient blocks to reconstruct the transformation blocks of the TUs of the current CU. The video decoder 30 can reconstruct a coding block of the current CU based on samples of the predictor blocks of the PUs of the current CU and residual samples of the transformation blocks of the TUs of the current CU. In some examples, the video decoder 30 can reconstruct the coding blocks of the current CU by adding the samples of the predictor blocks of the PUs of the current CU to corresponding residual samples of the transformation blocks of the TUs of the current CU. [0066] [0066] A slice of an image can include an integer number of blocks of the image. For example, a slice of an image can include an integer number of CTUs of the image. The CTUs of a slice can be ordered consecutively in a scan order, such as a raster scan order. In some codecs, a slice is defined as an integer number of CTUs contained in an independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. In addition, in some codecs, a slice segment is defined as an integer number of CTUs ordered consecutively in the tile scan and contained in a single NAL unit. A tile scan is a specific sequential ordering of the CTBs that partition an image, in which the CTBs are ordered consecutively in a raster scan of the CTBs within a tile, while the tiles of the image are ordered consecutively in a raster scan of the tiles of the image. A tile is a rectangular region of CTBs within a particular tile column and a particular tile row in an image.
[0067] [0067] Intra prediction performs image block prediction using the spatially neighboring reconstructed image samples of the block. A typical example of intra prediction for a 16 x 16 image block is shown in Figure 2. With intra prediction, the 16 x 16 image block (in the dark outlined square) is predicted from the neighboring reconstructed samples above and to the left (reference samples) along a selected prediction direction (as indicated by the white arrow). In Figure 2, a black square contains a 16 x 16 block 50. In Figure 2, block 50 is predicted from the reconstructed samples above and to the left 52, 54 (that is, reference samples) along a selected prediction direction. In Figure 2, the samples outside the black box are the reference samples. The white arrow in Figure 2 indicates the selected prediction direction. [0068] [0068] Figure 3 illustrates an example of intra prediction modes. In some examples, the intra prediction of a luma block includes 35 modes, including planar mode, DC mode and 33 angular modes. The 35 intra prediction modes are indexed as shown in the table below.
Table 1 - Specification of intra prediction mode and associated names
Intra prediction mode | Associated name
0 | INTRA_PLANAR
1 | INTRA_DC
2..34 | INTRA_ANGULAR2..INTRA_ANGULAR34
[0069] [0069] Figure 4 is a conceptual diagram that illustrates an example of a planar mode. For Planar mode, which is normally the most frequently used intra prediction mode, the prediction sample is generated as shown in Figure 4. To perform planar prediction for an N x N block, for each sample p located at (x, y), the prediction value can be calculated by applying a bilinear filter to four specific neighboring reconstructed samples, that is, reference samples. The four reference samples include the reconstructed sample in the upper right corner TR, the reconstructed sample in the lower left corner BL, and the two reconstructed samples located in the same column, r(x, −1), and row, r(−1, y), as the current sample.
The planar mode can be formulated as: p(x, y) = (ph(x, y) + pv(x, y) + N) >> (log2(N) + 1), where ph(x, y) = (N − 1 − x) · r(−1, y) + (x + 1) · TR and pv(x, y) = (N − 1 − y) · r(x, −1) + (y + 1) · BL. [0070] [0070] For DC mode, the predictor block is filled with the average value of the neighboring reconstructed samples. Generally, both the Planar and DC modes are applied to model smoothly varying and constant image regions. [0071] [0071] For the angular intra prediction modes in HEVC, which include 33 different prediction directions, the intra prediction process can be described as follows. For each given angular intra prediction mode, the intra prediction direction can be identified accordingly; for example, according to Figure 3, intra mode 18 corresponds to a pure horizontal prediction direction, and intra mode 26 corresponds to a pure vertical prediction direction. Therefore, in the example of Figure 3, intra mode 18 corresponds to horizontal mode and intra mode 26 corresponds to vertical mode. [0072] [0072] Figure 5 is a conceptual diagram that illustrates an example of an angular intra prediction mode. Given a specific intra prediction direction, for each sample in the predictor block, its coordinate (x, y) is first projected onto the row/column of neighboring reconstructed samples along the prediction direction, as shown in an example in Figure 5. Suppose (x, y) is projected to a fractional position α between two neighboring reconstructed samples L and R; then, the prediction value for (x, y) can be calculated using a two-tap bilinear interpolation filter, formulated as: p(x, y) = (1 − α) · L + α · R. For example, as shown in the example in Figure 5, the coordinates (x, y) of a sample 70 of a predictor block 72 are projected along a specific intra prediction direction 74. To avoid floating point operations, in HEVC, the above calculation is approximated using integer arithmetic as: p(x, y) = ((32 − a) · L + a · R + 16) >> 5, where a is an integer equal to 32 · α.
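Planar prediction for an N x N block can be sketched as follows, consistent with the four reference samples described above (TR, BL, and the samples in the same column and row as the current sample). The expression used is the well-known HEVC-style combination of a horizontal and a vertical linear interpolation, offered here as an illustration; names are illustrative.

```python
import math

def planar_predict(top, left, N):
    """HEVC-style planar prediction for an N x N block.
    top[x]  = reconstructed sample at (x, -1), for x in 0..N (top[N] is TR)
    left[y] = reconstructed sample at (-1, y), for y in 0..N (left[N] is BL)"""
    tr, bl = top[N], left[N]
    shift = int(math.log2(N)) + 1
    pred = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            h = (N - 1 - x) * left[y] + (x + 1) * tr   # horizontal component
            v = (N - 1 - y) * top[x] + (y + 1) * bl    # vertical component
            pred[y][x] = (h + v + N) >> shift          # rounded average
    return pred
```

With constant references the output is that constant, as expected for a bilinear blend; with varying references the block transitions smoothly between the boundary samples.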
[0073] [0073] In some examples, prior to intra prediction, the neighboring reference samples are filtered using a 2-tap bilinear or 3-tap (1,2,1)/4 filter, which is known as intra reference smoothing or mode-dependent intra smoothing (MDIS). When performing intra prediction, given the intra prediction mode index (predModeIntra) and the block size (nTbS), it is decided whether a reference smoothing process is performed and which smoothing filter is used. The intra prediction mode index is an index that indicates an intra prediction mode. The text of the HEVC standard describes a process for filtering neighboring samples. [0074] [0074] Panusopone et al., "Unequal Weight Planar Prediction and Constrained PDPC", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 5th Meeting, Geneva, CH, January 12-20, 2017, document JVET-E0068, described an unequal weight planar (UWP) mode. In UWP mode, for a W x H (width x height) block, planar prediction is performed as shown in Figure 4, where R and B are calculated as shown below. [0075] [0075] Although the cross-component redundancy is significantly reduced in the YCbCr color space, correlation still exists between the three color components. Various techniques have been studied to improve video coding performance by further reducing the correlation between the three color components. For example, for 4:2:0 chroma format video coding, a technique called Linear Model (LM) prediction mode was studied during the development of the HEVC standard. In 4:2:0 sampling, each of the two chroma matrices is half the height and half the width of the luma matrix. With the LM prediction mode, the chroma samples are predicted based on the reconstructed luma samples of the same block, using a linear model as follows: pred_C(i, j) = α · rec_L(i, j) + β (5), where pred_C(i, j) represents a prediction of the chroma samples in a current block and rec_L(i, j) represents the downsampled reconstructed luma samples of the current block.
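The linear-model prediction just described can be illustrated with a short sketch. This is a simplified floating-point illustration, not the fixed-point derivation of the standard text: the parameters α and β are fit by ordinary least squares over the neighboring reference samples (the minimization described in the following paragraphs), and the chroma samples are then predicted from the co-located downsampled luma samples. Function names are illustrative.

```python
def derive_lm_params(luma_ref, chroma_ref):
    """Least-squares fit of the linear model pred_C = alpha * rec_L + beta
    from neighboring reconstructed luma/chroma reference samples."""
    n = len(luma_ref)
    sx, sy = sum(luma_ref), sum(chroma_ref)
    sxx = sum(l * l for l in luma_ref)
    sxy = sum(l * c for l, c in zip(luma_ref, chroma_ref))
    denom = n * sxx - sx * sx
    if denom == 0:                    # degenerate neighborhood: flat model
        return 0.0, sy / n
    alpha = (n * sxy - sx * sy) / denom
    beta = (sy - alpha * sx) / n
    return alpha, beta

def lm_predict(rec_luma, alpha, beta):
    # predict each chroma sample from the co-located (downsampled) luma sample
    return [[alpha * v + beta for v in row] for row in rec_luma]
```

A practical codec implements the same fit with integer arithmetic and shift operations, as noted below for equations (3) and (4).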
Parameters α and β are derived from reconstructed causal samples around the current block. Causal samples of a block are samples that occur before the block in decoding order. If the size of the chroma block is denoted by N x N, then both i and j are within the range [0, N). [0076] [0076] Parameters α and β in equation (5) are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block. [0077] [0077] In the equations above, x_i is a downsampled reconstructed luma reference sample, where the color format is not 4:4:4 (that is, the color format is one in which one chroma sample corresponds to multiple luma samples), y_i is a reconstructed chroma reference sample without downsampling, and I is the number of reference samples. In other words, the video encoder can downsample the reconstructed luma reference samples when the color format is not 4:4:4, but avoids downsampling the reconstructed luma reference samples when the color format is 4:4:4. For a target N x N chroma block, when both the left and above causal samples are available, the total number of involved samples I is equal to 2N. When only the left or only the above causal samples are available, the total number of involved samples I is equal to N. Here, N is always equal to 2^m (where m can be different for different CU sizes). Therefore, to reduce complexity, shift operations can be used to implement the division operations in equations (3) and (4). A reference sample can be considered unavailable when a true value of the reference sample is not available for use by a video decoder (for example, when the reference sample is outside a slice, image or tile boundary in relation to the predicted block). [0078] [0078] US Patent Publication No.
2017-0094285-A1 describes a set of parameterized equations that define how to combine predictions based on filtered and unfiltered reference values and on the position of the predicted pixel. This scheme is called position-dependent prediction combination (PDPC) and was adopted in JEM 7. A. Said et al., "Position dependent intra prediction combination", ITU-T SG16 contribution COM16-C1046, Geneva, Switzerland, October 2015, is a standard submission document for the adopted subject matter. In JEM 7, PDPC is applied only in Planar mode. [0079] [0079] Figure 6A is a conceptual diagram that illustrates an example of data available for PDPC for a 4 x 4 pixel block. Figure 6B is a conceptual diagram that illustrates an example of data available for PDPC for a 4 x 4 pixel block. With PDPC, given any two sets of pixel predictions p_r[x, y] and q_s[x, y], calculated using only unfiltered and filtered (or smoothed) references, respectively, the combined predicted value of a pixel, denoted by v[x, y], is defined by: v[x, y] = c[x, y] · p_r[x, y] + (1 − c[x, y]) · q_s[x, y], where c[x, y] is the set of combination parameters, whose value depends on the pixel position. In Figure 6A and Figure 6B, the references (that is, reference samples) are shown as shaded squares and the prediction samples are shown as white squares. [0080] [0080] A practical implementation of the PDPC uses a formula in which predefined parameters control how quickly the weights of the left, top-left and top unfiltered reference samples fall off in the horizontal and vertical directions, N is the size of the block, and q_s^(HEVC)[x, y] are prediction values calculated according to the HEVC standard, for the specific mode, using filtered references. [0081] [0081] Alternatively, the PDPC can be formulated as a 4-tap filter, in which weights are applied to the top, left and top-left unfiltered reference samples. [0082] [0082] There are several deficiencies in JEM related to the PDPC mode.
For example, in JEM 7, PDPC is applied only to planar mode, which limits the coding gain contributed by PDPC. In JEM 7, there is a boundary filter applied to the DC mode and an edge filter applied to the horizontal and vertical modes. The boundary filter applied in DC mode filters the predictor across the block boundaries on the left and above. The horizontal/vertical edge filter is a filter that compensates for the difference between the horizontal or vertical reference pixel and the upper left pixel. These coding tools refine intra prediction in a way that overlaps PDPC in terms of algorithm design. The existence of several overlapping coding tools is not desired in terms of a clean and harmonized intra prediction design. The parameters in the PDPC are derived based on training, which can be suboptimal in terms of coding gain for different sequences. The UWP mode requires division operations, which are not preferred for practical implementation. Although the UWP can alternatively be approximated using a lookup table (LUT) to avoid divisions, it requires additional memory to store the LUT. To solve the problems mentioned above, this disclosure proposes the following techniques. [0083] [0083] This disclosure describes simplified PDPC techniques that can address one or more of the deficiencies described above. Figure 7A is a block diagram illustrating an example of using a planar/DC mode with a weighting applied to generate a prediction sample (0, 0), according to a technique of this disclosure. Figure 7B is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (1, 0), according to a technique of this disclosure. Figure 7C is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (0, 1), according to a technique of this disclosure.
Figure 7D is a block diagram illustrating an example using a planar/DC mode with a weighting applied to generate a prediction sample (1, 1), according to a technique of this disclosure. The simplified PDPC techniques disclosed here can be applied to both the luminance (Y) and chrominance (Cb, Cr) components, to only the luminance component, or to only the chrominance components. [0084] [0084] According to a technique of this disclosure, a video encoder can calculate a prediction sample at coordinates (x, y) using equation (9), below. [0085] [0085] In another example, the video encoder can calculate a prediction sample value at the coordinates (x, y) according to equation (10), below. [0086] [0086] In another example, the video encoder can calculate a prediction sample value at the coordinates (x, y) according to equation (11), below. [0087] [0087] To avoid a LUT for the weights, as occurred in the original PDPC design, in the simplified version a video encoder implemented according to the techniques of this disclosure can select the initial weights wL, wT for the left and top reference samples (for example, 32, 32, as shown in Figure 7A), derive the weight wTL for the top-left reference sample as −(wL >> 4) − (wT >> 4) (for example, −4 as shown in Figure 7A), and then calculate the PDPC prediction for the first sample in the block by applying equation (9), (10), or (11). [0088] [0088] Moving on to the next sample in the block, the initial values of the weights wL, wT, wTL are updated based on the distance between the current sample and the boundaries of the block. For example, the update can be just a shift operation, such as ">> 1" or ">> 2", that is, a weight is divided by 2 or 4 (for example, going from (0, 0) to (1, 0), as shown in Figure 7A and Figure 7B, wL is divided by 4). Moving on to the next sample in the block, the weights are updated again, and so on.
In this approach, there is no need for a LUT with weights, as all weights are derived from the initial weights. [0089] [0089] Thus, according to a technique of this disclosure, the video encoder (for example, video encoder 20 or video decoder 30) can determine an initial value of a first weight (wL) [0090] [0090] In some examples, the update of wL, wT and wTL may not occur after deriving a PDPC prediction for each sample. That is, in some examples, the video encoder does not determine different values of the first, second and third weights for each separate sample of the predictor block. For example, the update can occur after every N processed samples (for example, when N = 2, the update occurs after every second sample). In another example, the video encoder updates wL for every other sample along the horizontal direction, and wT is updated for every other sample along the vertical direction. In addition, all of the involved weights (for example, the first, second, third and fourth weights) may have different update processes and distance dependencies. For example, the video encoder can update the first, second, third and fourth weights after processing different numbers of samples, and the amount by which the video encoder changes the first, second, third and fourth weights according to the distance from the boundaries of the predictor block may be different.
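Equations (9) to (11) are not reproduced in this text. As an illustration only, the sketch below assumes one plausible form of the weighted combination, consistent with the initial weights wL = wT = 32 and wTL = −4 described above and with 6-bit (sum-to-64) weight precision, in which the remaining weight is given to the primary prediction value; the actual equations of the disclosure may differ.

```python
def pdpc_blend(pred, left, top, topleft, wL, wT, wTL):
    """Combine a primary intra prediction value with the left, top and
    top-left reference samples using position-dependent weights.
    Assumed form: the weights sum to 64, with the primary value taking
    the weight 64 - wL - wT - wTL, and +32 providing rounding."""
    wP = 64 - wL - wT - wTL        # weight of the primary prediction value
    return (wL * left + wT * top + wTL * topleft + wP * pred + 32) >> 6
```

With all weights zero the primary value passes through unchanged, and with the initial weights (32, 32, −4) a sample whose references all equal the primary value is also unchanged, which is the expected behavior of a normalized blend.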
In another example, the video encoder can update a weight (for example, the first, second, third, and / or fourth weight) after processing each sample in a predictor block if the prediction mode of the predictor block is a directional intra prediction mode and can update the weight after processing two samples from the predictor block if the prediction mode of the predictor block is Planar or DC mode. [0092] [0092] In an implementation example, a double implementation of the simplified PDPC techniques of this disclosure may be possible and both implementations may provide the identical results. In one implementation, the simplified PDPC techniques of this disclosure can be considered as a second stage after the conventional intra prediction is derived. That is, the simplified PDPC techniques of this disclosure can be observed as prediction post processing. Such a method can provide a uniform implementation for all intra-modes, but may require an additional stage in intra-prediction. In another implementation, the simplified PDPC techniques of this disclosure can be implemented as a modified intra prediction mode, that is, another intra prediction mode other than conventional intra prediction. In such a method, the modification may be a specific intra prediction mode, but may not require an additional intra prediction stage. The simplified PDPC techniques in this release can have this duality in implementation, which can provide an additional advantage and an implementer can choose the most appropriate approach to the codec design. 
[0093] [0093] For intra prediction, for each prediction sample (for example, for each respective sample of the predictor block, or a subset thereof), after the intra prediction sample is generated (that is, after a primary value for the respective sample is determined), the prediction sample value can be further adjusted by a weighted sum of a left reference sample, a top reference sample, a top-left reference sample, and its original prediction sample value (that is, the primary value for the respective sample). The weight of the left reference sample is denoted wL, the weight of the top reference sample is denoted wT, and the weight of the top-left reference sample is denoted wTL. In some examples, a video encoder can derive wTL by a weighted sum of wL and wT (for example, wTL = a · wL + b · wT). Thus, in some such examples, the video encoder can determine a value of the third weight (wTL) for a sample of the predictor block as a sum of a first parameter (a) multiplied by the value of the first weight (wL) for the respective sample plus a second parameter (b) multiplied by the value of the second weight (wT) for the respective sample. In some examples, wL and wT are initialized to 0.5. In some examples, each of the first parameter and the second parameter is between 0 and 1, exclusive. [0094] [0094] The values of the first parameter (a) and the second parameter (b) can be determined in one or more ways. For example, the values of a and b can be mode dependent. For example, in an example where the values of a and b are mode dependent, the values of a and b can depend on the intra prediction direction. Thus, in this example, different values of a and b can be applied for different intra prediction directions.
For example, for non-directional intra prediction modes (for example, planar mode and [0095] [0095] Furthermore, in other examples where the values of a and b are mode dependent, for directional intra prediction modes (for example, horizontal and vertical), the first parameter (a) and the second parameter (b) are derived based on the difference of the intra prediction angle, or of the intra mode index, in relation to the horizontal and vertical prediction directions. For example, in one example, for horizontal prediction, a is equal to 0 and b is equal to 1; for vertical prediction, a is equal to 1 and b is equal to 0. In this example, for vertical-type prediction angles, the angle difference from the vertical prediction is measured to calculate a and b. In addition, in this example, for horizontal-type prediction angles, the angle difference from the horizontal prediction is measured to calculate a and b. [0096] [0096] In another example of how the values of the first parameter (a) and the second parameter (b) can be determined, a and b are two constant values, regardless of the position within the predictor block. In this example, typical values of a and b include: 1/16, 1/32, 1/8, 1/4, 1/2, 1 and 0. In another example, a and b are signaled from video encoder 20 to video decoder 30 at a sequence level, an image level, a slice level, a block level, or another level. [0097] [0097] In some examples, a video encoder can derive the value of the first weight (wT) based on a scaled vertical coordinate y of the prediction sample, a block size, and an intra prediction mode. For example, y can be divided by 2, 4 or 8 to derive the value of the first weight (wT). How the weight changes can depend on the block size (for example, on the coordinate divider). For small blocks, the weight drop-off can be faster than for larger blocks.
The video encoder can derive the value of the second weight (wL) based on a scaled horizontal coordinate x of the prediction sample, the block size, and the intra prediction mode. In this example, the block size can refer to one of: min(log2(width), log2(height)), max(log2(width), log2(height)), (log2(width) + log2(height)) / 2, log2(width) + log2(height), log2(width), or log2(height). In this example, for non-directional intra prediction modes such as Planar and DC modes, for the same coordinate (x, y), the same values of wT and wL can be used. In other words, in this example, wT = wL = m for each predictor sample in the predictor block, but there may be different values of m for different predictor samples in the predictor block. In addition, the scaling of the horizontal and vertical coordinates x and y can be predefined. In some examples, typical values of the scaling value include 2, 4, and 8. [0098] [0098] In one example, the techniques disclosed here (the above weightings of the left, top, and top left reference samples) are applied for Planar mode, DC mode, horizontal mode, and vertical mode. [0099] [0099] In some examples, the video encoder may apply the simplified PDPC techniques of this disclosure only when specific conditions apply. For example, when the conditions do not apply, the video encoder can determine secondary values of samples in a predictor block using conventional intra prediction. In one example, the video encoder can apply the simplified PDPC techniques of this disclosure to LM mode or enhanced LM mode (chroma component), as described in Zhang et al., “Enhanced cross-component linear model intra-prediction”, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 4th Meeting, Chengdu, CN, October 15-21, 2016, document no. JVET-D0110. In this example, the video encoder does not apply the simplified PDPC techniques of this disclosure to a predictor block of a CU when LM mode or enhanced LM mode is not applied to the CU.
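A minimal sketch of the per-sample weight derivation of paragraph [0097], in Python; the halving schedule (the weight is divided by two every 2**shift samples of distance from the boundary) and the initial weight of 0.5 are assumptions chosen for illustration:

```python
def pdpc_weights(x, y, shift=2, init=0.5):
    """Derive per-sample weights from the sample coordinates.
    wT decays with the vertical distance y from the top boundary and
    wL decays with the horizontal distance x from the left boundary;
    the coordinate is scaled by 2**shift, so the dividers 2, 4, and 8
    mentioned in the text correspond to shift values 1, 2, and 3."""
    wT = init / (1 << (y >> shift))  # weight of the top reference sample
    wL = init / (1 << (x >> shift))  # weight of the left reference sample
    return wL, wT
```

Samples near the top-left corner thus receive the full initial weights, and the reference-sample influence fades for samples deeper inside the block.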
In some instances, the simplified PDPC techniques in this disclosure are applied to certain block sizes. [0100] [0100] In addition, in another example, the simplified PDPC techniques of this disclosure are applied only to selected prediction sample positions within a prediction block, rather than to the entire prediction block. In some examples, the selected prediction sample positions are in a predefined number of columns, starting at the left, and/or a predefined number of rows, starting at the top. For example, a video encoder may apply the simplified PDPC techniques of this disclosure to even-numbered rows or columns of the prediction block, but not to odd-numbered rows or columns of the prediction block. In some examples, the number of columns/rows of prediction samples to which the simplified PDPC techniques of this disclosure are applied may depend on encoded information, including, but not limited to, the block size, the block height and/or width, and/or the intra prediction mode. For example, the video encoder can apply the simplified PDPC techniques of this disclosure to each row of the prediction block if the size of the prediction block is less than a limit size (for example, 16 x 16, 8 x 8, etc.) and can apply the simplified PDPC techniques of this disclosure to every other row/column of the prediction block if the prediction block size is greater than or equal to the limit size. [0101] [0101] The clipping operations on the prediction samples after the weighting operations can be removed. Clipping can be used to ensure that the predictor value is within a certain range, usually related to the input bit depth. If, after the predictor is derived, the pixel value exceeds the range, the video encoder can clip its value to the minimum or maximum of the range. However, according to the techniques of this disclosure, these clipping operations on prediction samples may be unnecessary.
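For reference, the clipping operation that paragraph [0101] indicates may be removable can be sketched as follows; the range [0, 2**bit_depth - 1] is the usual full-sample range for the input bit depth:

```python
def clip_to_bit_depth(value, bit_depth=8):
    """Clip a derived predictor sample to the minimum or maximum of the
    range implied by the input bit depth, i.e. [0, 2**bit_depth - 1]."""
    return max(0, min((1 << bit_depth) - 1, value))
```

Under the simplified PDPC weighting, the weights sum to one, so a weighted combination of in-range samples stays in range and this clip becomes a no-op.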
[0102] [0102] The left, top, and top left reference samples used to adjust the prediction sample can be filtered/smoothed reference samples or unfiltered reference samples, and the selection depends on the intra prediction mode. In one example, for the DC, horizontal, and vertical intra prediction directions, a video encoder can use the unfiltered top, left, and top left reference samples in calculating the weighted sum; for Planar mode, the filtered/smoothed left, top, and top left reference samples are used in the calculation. [0103] [0103] As described above, in previous versions of PDPC, a video encoder can apply 5-tap or 7-tap filters to the reference samples. However, according to some examples in this disclosure, instead of applying a longer-tap filter (for example, a 5-tap or a 7-tap filter) to the reference samples, the video encoder can apply only a short-tap filter (for example, a 3-tap filter), and the longer-tap filter can be replaced by several short-tap filters cascaded in a predefined order. This disclosure may use the phrase “smoothed reference samples” to refer to samples resulting from the application of a filter, or cascade of filters, to a set of reference samples. Different sets of smoothed reference samples can be applied in intra prediction, for example, in PDPC. In other words, the video encoder can generate different sets of smoothed reference samples when performing the simplified PDPC techniques of this disclosure. In some examples, the video encoder can generate the different sets of smoothed reference samples by applying different cascades of short-tap filters. Different cascades of short-tap filters can have different numbers of short-tap filters and/or different orders of short-tap filters in the cascade. The video encoder can select a set of smoothed reference samples with which to perform the simplified PDPC techniques of this disclosure.
The selection of which set of smoothed reference samples to use may depend on the block size, for example, the block area size, the block height and/or width, and/or the intra prediction mode. For example, the video encoder can select a first set of smoothed reference samples (for example, a set of reference samples generated with a first filter or filter set) when a prediction block size is less than a limit size (for example, 16 x 16, 8 x 8, etc.) and can select a second set of smoothed reference samples when the prediction block size is greater than or equal to the limit size. [0104] [0104] In an example where the video encoder applies a cascade of short-tap filters to the reference samples, the cascade of short-tap filters includes a 3-tap filter {2, 4, 2} and a 3-tap filter {4, 0, 4}. In this example, the video encoder generates a first set of smoothed reference samples using the 3-tap filter {2, 4, 2} and generates a second set of smoothed reference samples by further applying the 3-tap filter {4, 0, 4} on top of the first set of smoothed reference samples. In other words, the video encoder can generate the second set of smoothed reference samples by filter cascading, using the 3-tap filter {2, 4, 2} and the 3-tap filter {4, 0, 4}. In another example, the two short-tap filters include a {2, 4, 2} filter and a {3, 2, 3} filter, and the same process described above can be applied. In another example, instead of cascading two 3-tap filters, the video encoder can directly apply a 5-tap filter {1, 2, 2, 2, 1} to the reference samples. [0105] [0105] In some examples, the video encoder applies reference sample filtering with any of the intra prediction modes. In other examples, the video encoder applies reference sample filtering for some intra prediction modes, but not others.
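The filter cascading of paragraph [0104] can be sketched as follows in Python; the edge handling (repeating the outermost reference samples) is an assumption, but the tap values come from the text, and cascading {2, 4, 2} and {4, 0, 4} is numerically equivalent, away from the boundaries, to applying the 5-tap filter {1, 2, 2, 2, 1} directly:

```python
def filter_3tap(samples, taps):
    """Apply a normalized 3-tap filter to a 1-D list of reference samples,
    repeating the first and last samples at the boundaries."""
    norm = sum(taps)
    padded = [samples[0]] + list(samples) + [samples[-1]]
    return [(taps[0] * padded[i] + taps[1] * padded[i + 1]
             + taps[2] * padded[i + 2]) / norm
            for i in range(len(samples))]

def smoothed_reference_sets(samples):
    """First set: {2, 4, 2} applied to the unfiltered reference samples.
    Second set: {4, 0, 4} applied on top of the first set."""
    first = filter_3tap(samples, (2, 4, 2))
    second = filter_3tap(first, (4, 0, 4))
    return first, second
```

The equivalence follows from convolving the two kernels: {2, 4, 2} * {4, 0, 4} = {8, 16, 16, 16, 8}, which normalizes to {1, 2, 2, 2, 1} / 8.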
For example, in one example, when applying the simplified PDPC techniques of this disclosure, the video encoder only applies the reference sample filtering in Planar intra prediction mode, not in DC, horizontal, or vertical intra prediction modes. [0106] [0106] In some examples, the video encoder may apply different filters (for example, different numbers of filter taps) to the reference samples, and the selection of the filters may depend on the reference sample location. For example, in one example, the video encoder can apply different filters to a reference sample depending on whether or not the reference sample is located at a boundary position (lower left, upper right, upper left) of all the available reference samples. In some examples, which set of filters is applied is signaled by the video encoder 20 to the video decoder 30. In such examples, the video encoder 20 can signal the filter set at various locations within the bit stream, such as at a sequence level, an image level, a slice level, or a block level. [0107] [0107] In the original PDPC design, unfiltered and filtered versions of the reference samples are used in the prediction derivation. However, there may be another smoothing filter used for intra reference smoothing, such as the mode-dependent intra smoothing (MDIS) used in HEVC. To further unify the prediction process, the PDPC filters, such as the 3-tap or 5-tap filter used to generate the filtered version (the filter length may depend on the block size and/or intra direction), can be replaced in the simplified PDPC of this disclosure with exactly the same smoothing filtering (for example, MDIS) that can be applied for non-PDPC modes. In this way, it may not be necessary to maintain a specific PDPC filtering process, and the implementation of PDPC can be further simplified. [0108] [0108] The following part of this disclosure describes an example of a codec specification text to implement an example of the techniques of this disclosure.
Some operations that reflect the techniques of this disclosure are emphasized with “<highlight> ... </highlight>” indicators. [0109] [0109] The following section of codec specification text entitled “Neighbor sample filtering process” describes a filtering process for neighboring reference samples used for intra prediction.

Neighbor sample filtering process

Inputs to this process are:
- the neighboring samples p[x][y], with x = -1, y = -

[0110] [0110] As noted above, a video encoder can generate several sets of smoothed reference samples and then select one of the sets of smoothed reference samples for use in the simplified PDPC techniques of this disclosure. The following section of codec specification text entitled “Generation of multiple sets of intra prediction reference samples” describes an example of how to generate the various sets of smoothed reference samples.

Generation of multiple sets of intra prediction reference samples

Inputs to this process are:
- the unfiltered neighboring samples unfiltRef[x][y], with x = -1, y = -1..(nTbWidth + nTbHeight) - 1 and x = [0][0][x][y]. [0][0][x][y] and the transformation block sizes nTbWidth and nTbHeight as inputs, and the output is assigned to the sample array filtRef[1][x][y].

[0111] [0111] The following section of codec specification text entitled “INTRA_PLANAR intra prediction mode specification” describes a process for determining samples of a predictor block using the simplified PDPC techniques of this disclosure with the Planar intra prediction mode.

INTRA_PLANAR intra prediction mode specification

Inputs to this process are:
- the unfiltered neighboring samples unfiltRef[x][y], with x = -1, y = -1..nTbHeight * 2 - 1 and x =

[0112] [0112] The following section of codec specification text entitled “INTRA_DC intra prediction mode specification” describes a process for determining samples of a predictor block using the simplified PDPC techniques of this disclosure with the DC intra prediction mode.
[0113] [0113] The following section of codec specification text entitled “INTRA_HOR intra prediction mode specification” describes a process for determining samples of a predictor block using the simplified PDPC techniques of this disclosure with the horizontal intra prediction mode.

INTRA_HOR intra prediction mode specification

Inputs to this process are:
- the unfiltered neighboring samples unfiltRef[x][y], with x = -1, y = -1, 0, ..., nTbHeight * 2 - 1, and x = 0, 1, ..., nTbWidth * 2 - 1, y = -1,
- the variables nTbWidth and nTbHeight that specify the width and height of the prediction block.

[0114] [0114] The following section of codec specification text entitled “INTRA_VER intra prediction mode specification” describes a process for determining samples of a predictor block using the simplified PDPC techniques of this disclosure with the vertical intra prediction mode.

INTRA_VER intra prediction mode specification

Inputs to this process are:
- the unfiltered neighboring samples unfiltRef[x][y], with x = -1, y = -1..nTbHeight * 2 - 1 and x = 0..nTbWidth * 2 - 1, y = -1,

[0115] [0115] Figure 8 is a block diagram illustrating an example of video encoder 20 that can implement the techniques of this disclosure. Figure 8 is provided for explanatory purposes and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. The techniques of this disclosure may be applicable to various standards or coding methods. [0116] [0116] The processing circuitry includes video encoder 20, and video encoder 20 is configured to perform one or more of the example techniques described in this disclosure. For example, video encoder 20 includes a set of integrated circuits, and the various units illustrated in Figure 8 can be formed as hardware circuit blocks that are interconnected with a circuit bus. These hardware circuit blocks can be separate circuit blocks, or two or more of the units can be combined into a common hardware circuit block.
Hardware circuit blocks can be formed as a combination of electronic components that form operation blocks such as arithmetic logic units (ALUs) and elementary function units (EFUs), as well as logic blocks such as AND, OR, NAND, NOR, XOR, XNOR, and other similar logic blocks. [0117] [0117] In some examples, one or more of the units illustrated in Figure 8 are software units running on the processing circuitry. In such examples, the object code for these software units is stored in memory. An operating system can cause the video encoder 20 to retrieve the object code and execute the object code, which causes the video encoder 20 to perform the operations to implement the example techniques. In some instances, the software units may be firmware that the video encoder 20 runs at startup. Consequently, video encoder 20 is a structural component having hardware that performs the example techniques or having software/firmware running on the hardware to specialize the hardware to perform the example techniques. [0118] [0118] In the example of Figure 8, the video encoder 20 includes a prediction processing unit 100, video data memory 101, a residual generation unit 102, a transformation processing unit 104, a quantization unit 106, a reverse quantization unit 108, a reverse transformation processing unit 110, a reconstruction unit 112, a filter unit 114, a decoded image buffer 116, and an entropy coding unit 118. The prediction processing unit 100 includes an inter prediction processing unit 120 and an intra prediction processing unit 126. Inter prediction processing unit 120 may include a motion estimation unit and a motion compensation unit (not shown). [0119] [0119] The video data memory 101 can be configured to store video data to be encoded by the components of video encoder 20. The video data stored in the video data memory 101 can be obtained, for example, from video source 18.
The decoded image buffer 116 may be a reference image memory that stores reference video data for use in encoding video data by video encoder 20, for example, in intra- or inter-coding modes. Video data memory 101 and decoded image buffer 116 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 101 and decoded image buffer 116 can be provided by the same memory device or separate memory devices. In various examples, the video data memory 101 can be on-chip with other components of video encoder 20, or off-chip with respect to those components. The video data memory 101 can be the same as or part of the storage media 19 of Figure 1. [0120] [0120] Video encoder 20 receives video data. The video encoder 20 can encode each CTU in a slice of an image of the video data. Each of the CTUs can be associated with equally sized luma coding tree blocks (CTBs) and corresponding CTBs of the image. As part of encoding a CTU, prediction processing unit 100 can perform partitioning to divide the CTBs of the CTU into progressively smaller blocks. The smaller blocks can be coding blocks of CUs. For example, the prediction processing unit 100 can partition a CTB associated with a CTU according to a tree structure. [0121] [0121] Video encoder 20 can encode CUs of a CTU to generate encoded representations of the CUs (that is, encoded CUs). As part of encoding a CU, prediction processing unit 100 can partition the coding blocks associated with the CU among one or more PUs of the CU. Thus, each PU can be associated with a luma prediction block and corresponding chroma prediction blocks. The video encoder [0122] [0122] Inter-prediction processing unit 120 can generate predictive data for a PU.
As part of generating the predictive data for a PU, inter-prediction processing unit 120 performs inter-prediction on the PU. The predictive data for the PU can include predictor blocks for the PU and motion information for the PU. Inter-prediction processing unit 120 can perform different operations for a PU of a CU depending on whether the PU is in an I slice, a P slice, or a B slice. In an I slice, all PUs are intra predicted. Consequently, if the PU is in an I slice, the inter-prediction processing unit 120 does not perform inter-prediction on the PU. Thus, for blocks encoded in I mode, the predictor block can be formed using spatial prediction from previously encoded neighboring blocks within the same frame. If a PU is in a P slice, the inter-prediction processing unit 120 can use unidirectional inter-prediction to generate a predictor block of the PU. If a PU is in a B slice, the inter-prediction processing unit 120 can use unidirectional or bidirectional inter-prediction to generate a predictor block of the PU. [0123] [0123] The intra prediction processing unit 126 can generate predictive data for a PU by performing intra prediction on the PU. The predictive data for the PU can include predictor blocks of the PU and various syntax elements. The intra prediction processing unit 126 can perform intra prediction on PUs in I slices, P slices, and B slices. [0124] [0124] To perform intra prediction on a PU, the intra prediction processing unit 126 can use various intra prediction modes to generate multiple sets of predictive data for the PU. The intra prediction processing unit 126 can use reconstructed samples from sample blocks of neighboring PUs to generate a predictor block for a PU. The neighboring PUs can be above, above and to the right, above and to the left, or to the left of the PU, assuming a left-to-right, top-to-bottom coding order for PUs, CUs, and CTUs. The intra prediction processing unit 126 can use various numbers of intra prediction modes, for example, 33 directional intra prediction modes.
In some instances, the number of intra prediction modes may depend on the size of the region associated with the PU. The intra prediction processing unit 126 can perform the intra prediction techniques of this disclosure. [0125] [0125] The prediction processing unit 100 can select the predictive data for the PUs of a CU from among the predictive data generated by the inter prediction processing unit 120 for the PUs or the predictive data generated by the intra prediction processing unit 126 for the PUs. In some examples, the prediction processing unit 100 selects the predictive data for the PUs of the CU based on rate/distortion metrics of the sets of predictive data. The predictor blocks of the selected predictive data can be referred to here as the selected predictor blocks. [0126] [0126] The intra prediction processing unit 126 can generate a predictor block using an intra prediction mode according to any of the techniques of this disclosure. For example, as part of generating the predictor block, the intra prediction processing unit 126 can determine an initial value of a first weight and determine an initial value of a second weight. In addition, for each respective sample in a set of samples in the predictor block, the intra prediction processing unit 126 can determine, based on the initial value of the first weight and a distance between the respective sample and a first limit of the predictor block, a value of the first weight for the respective sample. The intra prediction processing unit 126 can also determine, based on the initial value of the second weight and a distance between the respective sample and a second limit of the predictor block, a value of the second weight for the respective sample.
[0127] [0127] Residual generation unit 102 can generate, based on the coding blocks (for example, luma, Cb, and Cr coding blocks) for a CU and the selected predictor blocks (for example, predictive luma, Cb, and Cr blocks) for the PUs of the CU, residual blocks (for example, residual luma, Cb, and Cr blocks) for the CU. For example, the residual generation unit 102 can generate the residual blocks of the CU so that each sample in the residual blocks has a value equal to a difference between a sample in a coding block of the CU and a corresponding sample in a corresponding selected predictor block of a PU of the CU. [0128] [0128] Transformation processing unit 104 can partition the residual blocks of a CU into transformation blocks of the TUs of the CU. For example, transformation processing unit 104 can perform quad-tree partitioning to partition the residual blocks of the CU into transformation blocks of the TUs of the CU. Thus, a TU can be associated with a luma transformation block and two chroma transformation blocks. The sizes and positions of the luma and chroma transformation blocks of the TUs of the CU may or may not be based on the sizes and positions of the prediction blocks of the CU. A quad-tree structure known as a “residual quad-tree” (RQT) can include nodes associated with each of the regions. The TUs of the CU can correspond to leaf nodes of the RQT. [0129] [0129] Transformation processing unit 104 can generate transformation coefficient blocks for each TU of a CU by applying one or more transformations to the transformation blocks of the TU. Transformation processing unit 104 can apply multiple transformations to a transformation block associated with a TU. For example, transformation processing unit 104 can apply a discrete cosine transform (DCT), a directional transform, or a conceptually similar transformation to a transformation block. In some examples, transformation processing unit 104 does not apply transformations to a transformation block. In such examples, the transformation block can be treated as a transformation coefficient block.
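The residual generation of paragraph [0127] amounts to a sample-wise subtraction, sketched here in Python under the assumption that blocks are represented as 2-D lists of integers:

```python
def residual_block(coding_block, predictor_block):
    """Each residual sample equals the coding-block sample minus the
    corresponding sample of the selected predictor block."""
    return [[orig - pred for orig, pred in zip(orig_row, pred_row)]
            for orig_row, pred_row in zip(coding_block, predictor_block)]
```

The better the predictor block matches the coding block, the closer the residual samples are to zero, which is what makes the subsequent transformation and quantization effective.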
[0130] [0130] The quantization unit 106 can quantize the transformation coefficients in a coefficient block. The quantization unit 106 can quantize a coefficient block associated with a TU of a CU based on a quantization parameter (QP) value associated with the CU. The video encoder 20 can adjust the degree of quantization applied to the coefficient blocks associated with a CU by adjusting the QP value associated with the CU. Quantization can introduce loss of information. Thus, the quantized transformation coefficients may be less accurate than the original ones. [0131] [0131] Reverse quantization unit 108 and reverse transformation processing unit 110 can apply reverse quantization and inverse transformations to a coefficient block, respectively, to reconstruct a residual block from the coefficient block. The reconstruction unit 112 can add the reconstructed residual block to corresponding samples of one or more predictor blocks generated by the prediction processing unit 100 to produce a reconstructed transformation block associated with a TU. By reconstructing transformation blocks for each TU of a CU in this way, the video encoder 20 can reconstruct the coding blocks of the CU. [0132] [0132] Filter unit 114 can perform one or more deblocking operations to reduce blocking artifacts in the coding blocks associated with a CU. The decoded image buffer 116 can store the reconstructed coding blocks after the filter unit 114 performs the one or more deblocking operations on the reconstructed coding blocks. Inter-prediction processing unit 120 may use a reference image containing the reconstructed coding blocks to perform inter-prediction on PUs of other images. In addition, the intra prediction processing unit 126 can use reconstructed coding blocks in decoded image buffer [0133] [0133] Entropy coding unit 118 can receive data from other functional components of video encoder 20.
For example, entropy coding unit 118 can receive coefficient blocks from quantization unit 106 and can receive syntax elements from prediction processing unit 100. Entropy coding unit 118 can perform one or more entropy coding operations on the data to generate entropy-encoded data. For example, the entropy coding unit 118 can perform a CABAC operation, a context-adaptive variable-length coding (CAVLC) operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, an Exponential-Golomb coding operation, or another type of entropy coding operation on the data. The video encoder 20 can output a bit stream that includes the entropy-encoded data generated by the entropy encoding unit 118. For example, the bit stream can include data that represents transformation coefficient values for a CU. [0134] [0134] Figure 9 is a block diagram illustrating an example of video decoder 30 that is configured to implement the techniques of this disclosure. Figure 9 is provided for purposes of explanation and is not limiting of the techniques as broadly exemplified and described in this disclosure. For explanatory purposes, this disclosure describes the video decoder 30 in the context of HEVC encoding. However, the techniques of this disclosure may apply to other standards or encoding methods, such as Versatile Video Coding (VVC). [0135] [0135] The processing circuitry includes the video decoder 30, and the video decoder 30 is configured to perform one or more of the example techniques described in this disclosure. For example, video decoder 30 includes integrated circuitry, and the various units illustrated in Figure 9 can be formed as hardware circuit blocks that are interconnected with a circuit bus.
These hardware circuit blocks can be separate circuit blocks, or two or more of the units can be combined into a common hardware circuit block. Hardware circuit blocks can be formed as a combination of electronic components that form operation blocks such as arithmetic logic units (ALUs) and elementary function units (EFUs), as well as logic blocks such as AND, OR, NAND, NOR, XOR, XNOR, and other similar logic blocks. [0136] [0136] In some examples, one or more of the units illustrated in Figure 9 may be software units running on the processing circuitry. In such examples, the object code for these software units is stored in memory. An operating system can cause the video decoder 30 to retrieve the object code and execute the object code, which causes the video decoder 30 to perform operations to implement the example techniques. In some instances, the software units may be firmware that the video decoder 30 runs at startup. Consequently, the video decoder 30 is a structural component having hardware that performs the example techniques or having software/firmware running on the hardware to specialize the hardware to perform the example techniques. [0137] [0137] In the example of Figure 9, the video decoder 30 includes an entropy decoding unit 150, video data memory 151, a prediction processing unit 152, an inverse quantization unit 154, a reverse transformation processing unit 156, a reconstruction unit 158, a filter unit 160, and a decoded image buffer 162. The prediction processing unit 152 includes a motion compensation unit 164 and an intra prediction processing unit 166. In other examples, the video decoder 30 may include more, fewer, or different functional components. [0138] [0138] Video data memory 151 can store encoded video data, such as a stream of encoded video bits, to be decoded by the components of video decoder 30.
The video data stored in video data memory 151 can be obtained, for example, from computer-readable medium 16, for example, from a local video source, such as a camera, through wired or wireless network communication of video data, or by accessing physical data storage media. The video data memory 151 can form an encoded image buffer (CPB) that stores encoded video data from an encoded video bit stream. The decoded image buffer 162 can be a reference image memory that stores reference video data for use in decoding video data by the video decoder 30, for example, in intra- or inter-coding modes, or for output. Video data memory 151 and decoded image buffer 162 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 151 and decoded image buffer 162 can be provided by the same memory device or separate memory devices. In various examples, the video data memory 151 may be on-chip with other components of the video decoder 30, or off-chip with respect to those components. The video data memory 151 can be the same as or part of the storage media 28 of Figure 1. [0139] [0139] Video data memory 151 receives and stores encoded video data (for example, NAL units) from a bit stream. The entropy decoding unit 150 can receive encoded video data (for example, NAL units) from the video data memory 151 and can parse the NAL units for syntax elements. The entropy decoding unit 150 can entropy decode entropy-encoded syntax elements in the NAL units. The prediction processing unit 152, reverse quantization unit 154, reverse transformation processing unit 156, reconstruction unit 158, and filter unit 160 can generate decoded video data based on the syntax elements extracted from the bit stream.
The entropy decoding unit 150 can perform a process generally reciprocal to that of the entropy coding unit 118. [0140] [0140] In addition to obtaining syntax elements from the bit stream, the video decoder 30 can perform a reconstruction operation on a non-partitioned CU. To perform the reconstruction operation on a CU, the video decoder 30 can perform a reconstruction operation on each TU of the CU. By performing the reconstruction operation for each TU of the CU, the video decoder 30 can reconstruct the residual blocks of the CU. [0141] [0141] As part of performing a reconstruction operation on a TU of a CU, the inverse quantization unit 154 can inverse quantize, that is, dequantize, coefficient blocks associated with the TU. After the inverse quantization unit 154 inverse quantizes a coefficient block, the inverse transformation processing unit 156 can apply one or more inverse transformations to the coefficient block in order to generate a residual block associated with the TU. For example, the reverse transformation processing unit 156 can apply an inverse DCT, an inverse integer transformation, an inverse Karhunen-Loeve transformation (KLT), an inverse rotation transformation, an inverse directional transformation, or another inverse transformation to the coefficient block. [0142] [0142] If a PU is encoded using intra prediction, the intra prediction processing unit 166 can perform intra prediction to generate predictor blocks of the PU. The intra prediction processing unit 166 can use an intra prediction mode to generate the predictor blocks of the PU based on samples of spatially neighboring blocks. The intra prediction processing unit 166 can determine the intra prediction mode for the PU based on one or more syntax elements obtained from the bit stream. The intra prediction processing unit 166 can perform the intra prediction techniques of this disclosure.
[0143] The intra prediction processing unit 166 can generate a predictor block using an intra prediction mode according to any of the techniques of this disclosure. For example, as part of generating the predictor block, the intra prediction processing unit 166 can determine an initial value of a first weight and determine an initial value of a second weight. In addition, for each respective sample in a set of samples in the predictor block, the intra prediction processing unit 166 can determine, based on the initial value of the first weight and a distance between the respective sample and a first limit of the predictor block, a value of the first weight for the respective sample. [0144] If a PU is coded using inter prediction, the motion compensation unit 164 can determine motion information for the PU. The motion compensation unit 164 can determine, based on the motion information of the PU, one or more reference blocks. The motion compensation unit 164 can generate, based on the one or more reference blocks, predictive blocks (for example, luma, Cb and Cr predictive blocks) for the PU. [0145] Reconstruction unit 158 can use the transformation blocks (for example, luma, Cb and Cr transformation blocks) for the TUs of the CU and the predictor blocks (for example, luma, Cb and Cr predictive blocks) for the PUs of the CU, that is, intra-prediction data or inter-prediction data, as applicable, to reconstruct the coding blocks (for example, luma, Cb and Cr coding blocks) for the CU. For example, reconstruction unit 158 can add samples of the transformation blocks (for example, luma, Cb and Cr transformation blocks) to corresponding samples of the predictor blocks (for example, luma, Cb and Cr predictive blocks) to reconstruct the coding blocks (for example, luma, Cb and Cr coding blocks) of the CU. [0146] Filter unit 160 can perform a deblocking operation to reduce blocking artifacts associated with the coding blocks of the CU. 
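The reconstruction step described above, in which reconstruction unit 158 adds residual samples to corresponding predictor samples, can be sketched as follows. This is an illustrative sketch only: the function name, the list-of-rows block representation, and the clipping to the sample bit depth are assumptions for illustration and are not taken from the disclosure.

```python
def reconstruct_block(pred_block, residual_block, bit_depth=8):
    # Add each residual sample to the corresponding predictor sample, then
    # clip the result to the valid sample range [0, 2**bit_depth - 1].
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val) for p, r in zip(p_row, r_row)]
            for p_row, r_row in zip(pred_block, residual_block)]
```

For example, a predictor sample 200 plus a residual 80 clips to 255 for 8-bit video.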
Video decoder 30 can store the coding blocks of the CU in decoded image buffer 162. Decoded image buffer 162 can provide reference images for subsequent motion compensation, intra prediction, and presentation on a display device, such as display device 32 of Figure 1. For example, video decoder 30 can perform, based on the blocks in decoded image buffer 162, intra prediction or inter prediction operations for PUs of other CUs. [0147] Figure 10 is a flow chart illustrating an example of operation of video encoder 20 according to a technique of this disclosure. The flowcharts of this disclosure are provided as examples. In other examples, operations may include more, fewer, or different actions. In addition, in some instances, operations can be performed in different orders. [0148] In the example of Figure 10, video encoder 20 (for example, intra prediction processing unit 126 (Figure 8) of video encoder 20) can generate a predictor block using an intra prediction mode (1000). In some instances, the intra prediction mode is a DC intra prediction mode. In some instances, the intra prediction mode is a horizontal intra prediction mode. In some examples, the intra prediction mode is a vertical intra prediction mode. In some instances, the intra prediction mode may be another type of intra prediction mode. In some examples, each of the samples in the predictor block is a luma sample. In some examples, each of the samples in the predictor block is a chroma sample. In some instances, video encoder 20 can apply the operation of Figure 10 to both luma and chroma samples. [0149] As part of the generation of the predictor block, the video encoder 20 can determine an initial value of a first weight (1002). In addition, video encoder 20 can determine an initial value of a second weight (1004). In some examples, the initial value of the first weight and the initial value of the second weight can be fixed and predetermined. 
In some examples, wL and wT are initialized to 0.5. In some examples, the video encoder 20 can determine the initial value of the first weight and the initial value of the second weight based on one or more factors, such as a block size of the predictor block or the intra prediction mode. [0150] In addition, for each respective sample in a set of samples in the predictor block, the video encoder 20 can determine, based on the initial value of the first weight and a distance between the respective sample and a first limit of the predictor block, a value of the first weight for the respective sample (1006). The first limit can be an upper limit of the predictor block. In some examples, the video encoder 20 can determine the value of the first weight for the respective sample by performing a shift operation on the initial value of the first weight by an amount based on the distance between the respective sample and the first limit of the predictor block. For example, in the sections of this disclosure entitled “PLANAR intra prediction mode specification”, “DC intra prediction mode specification”, “HOR intra prediction mode specification”, and “VER intra prediction mode specification”, the video encoder 20 can determine the value of the first weight for the respective sample as 32 >> ((y << 1) >> rightShift), where 32 is the initial value of the first weight and y is a distance in samples from the upper limit of the predictor block. In other examples, the video encoder 20 can determine the value of the first weight for the respective sample by dividing the initial value of the first weight by an amount based on the distance between the respective sample and the first limit of the predictor block. [0151] The set of samples in the predictor block can include each sample in the predictor block. In other examples, the set of samples in the predictor block may be a subset of the samples in the predictor block. 
In other words, in some examples, the simplified PDPC techniques of this disclosure are only applied to selected sample prediction positions instead of being applied to the entire predictor block. For example, the selected sample prediction positions to which the simplified PDPC techniques of this disclosure are applied can be a predefined number of columns in the predictor block starting from a left of the predictor block and/or a predefined number of rows in the predictor block starting from a top of the predictor block. [0152] The video encoder 20 can also determine, based on the initial value of the second weight and a distance between the respective sample and a second limit of the predictor block, a value of the second weight for the respective sample (1008). The second limit can be a left limit of the predictor block. In some examples, video encoder 20 can determine the value of the second weight for the respective sample by performing a shift operation on the initial value of the second weight by an amount based on the distance between the respective sample and the second limit of the predictor block. For example, in the sections of this disclosure entitled “PLANAR intra prediction mode specification”, “DC intra prediction mode specification”, “HOR intra prediction mode specification”, and “VER intra prediction mode specification”, video encoder 20 can determine the value of the second weight for the respective sample as 32 >> ((x << 1) >> rightShift), where 32 is the initial value of the second weight and x is a distance in samples from the left limit of the predictor block. [0153] In some examples, video encoder 20 can determine the value of the first weight for the respective sample based on one or more of: a scaled horizontal coordinate of the current sample, a block size of the predictor block, or the intra prediction mode. 
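The two shift-based weight formulas quoted above can be sketched in a few lines. The function and parameter names are illustrative assumptions; the constants (initial value 32) and the shift expressions follow the formulas quoted from the specification sections, with rightShift derived elsewhere in the disclosure (for example, from the block size).

```python
def boundary_weights(x, y, right_shift):
    # Weight tied to the upper limit: 32 >> ((y << 1) >> rightShift);
    # it decays as the sample row y moves away from the top boundary.
    w_top = 32 >> ((y << 1) >> right_shift)
    # Weight tied to the left limit: 32 >> ((x << 1) >> rightShift);
    # it decays as the sample column x moves away from the left boundary.
    w_left = 32 >> ((x << 1) >> right_shift)
    return w_left, w_top
```

Note that the disclosure's mapping of "first weight" and "second weight" onto wL and wT varies between paragraphs; here the names simply describe which boundary each weight decays from.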
In some examples, video encoder 20 may derive the value of the second weight for the respective sample based on one or more of: a scaled vertical coordinate of the respective sample, the block size of the predictor block, or the intra prediction mode. [0154] In addition, the video encoder 20 can determine a value of a third weight for the respective sample (1010). The video encoder 20 can determine the value of the third weight for the respective sample in one or more of several ways. For example, video encoder 20 can determine the value of the third weight for the respective sample as (wL >> 4) + (wT >> 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and >> is the right shift operation. In another example, video encoder 20 can determine the value of the third weight for the respective sample as -(wL >> 4) - (wT >> 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and >> is the right shift operation. [0155] In some examples, the video encoder 20 determines the value of the third weight for the respective sample based on the intra prediction mode. For example, in one example, as shown in the sections of this disclosure entitled “PLANAR intra prediction mode specification” and “DC intra prediction mode specification”, the video encoder 20 can determine the value of the third weight for the respective sample as wTL = (wL >> 4) + (wT >> 4). In this example, as shown in the section of this disclosure entitled “HOR intra prediction mode specification”, video encoder 20 can determine the value of the third weight for the respective sample as wTL = wT. 
In this example, as shown in the section of this disclosure entitled “VER intra prediction mode specification”, video encoder 20 can determine the value of the third weight for the respective sample as wTL = wL. [0156] In some examples, video encoder 20 can determine the value of the third weight for the respective sample as a sum of a first parameter multiplied by the value of the first weight for the respective sample plus a second parameter multiplied by the value of the second weight for the respective sample. As an example, video encoder 20 can determine the value of the third weight (wTL) as wTL = a·wL + b·wT. The video encoder 20 can determine the values of the first parameter (a) and the second parameter (b) according to any of the examples provided elsewhere in this disclosure. For example, video encoder 20 can determine, based on an intra prediction direction, the values of the first parameter and the second parameter. In some examples, the values of the first parameter and the second parameter are dependent on an intra prediction angle difference or an intra mode index difference with respect to a horizontal and/or vertical prediction direction. [0157] In some examples, video encoder 20 can signal values of the first parameter and the second parameter in the bit stream. As noted elsewhere in this disclosure, the bit stream may include an encoded representation of the video data. For example, video encoder 20 may include syntax elements that specify the first parameter and the second parameter in the bit stream. [0158] In addition, in the example of Figure 10, video encoder 20 can also determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample (1012). 
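The mode-dependent derivations of the third weight wTL described in the preceding paragraphs can be collected into a small sketch. The string mode labels and the function name are illustrative conventions, not part of the disclosure; the three formulas are the ones quoted above.

```python
def third_weight(mode, w_left, w_top):
    # wTL per intra prediction mode, following the quoted formulas.
    if mode in ("PLANAR", "DC"):
        return (w_left >> 4) + (w_top >> 4)  # wTL = (wL >> 4) + (wT >> 4)
    if mode == "HOR":
        return w_top                          # wTL = wT
    if mode == "VER":
        return w_left                         # wTL = wL
    raise ValueError("unsupported intra prediction mode: " + mode)
```

The parametrized alternative wTL = a·wL + b·wT described above generalizes this: the PLANAR/DC case corresponds to a = b = 1/16 in integer-shift form.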
The video encoder 20 can determine the value of the fourth weight for the respective sample in one or more of several ways. For example, video encoder 20 can determine the value of the fourth weight for the respective sample as being equal to (64 - wT - wL - wTL), where wT is the value of the second weight for the respective sample, wL is the value of the first weight for the respective sample, and wTL is the value of the third weight for the respective sample, and the second value (that is, the right shift amount) is equal to 6. In some examples, the video encoder 20 can determine the value of the fourth weight for the respective sample as being equal to (2^rightShift - wT - wL - wTL), where wT is the value of the second weight for the respective sample, wL is the value of the first weight for the respective sample, and wTL is the value of the third weight for the respective sample. 
Thus, the reference samples of the predictor block can include a left reference sample for the respective sample that is to the left of the respective sample, an above reference sample for the respective sample that is above the respective sample, and a left-above reference sample for the respective sample that is above and to the left of the respective sample. However, depending on the intra prediction mode, determining the primary value for the respective sample based on the reference samples of the predictor block does not require video encoder 20 to use one or more, or any, of the left reference sample for the respective sample, the above reference sample for the respective sample, or the left-above reference sample for the respective sample. In some examples, video encoder 20 can determine the primary value for the respective sample based on reconstructed samples that are within the predictor block. [0161] In some examples, video encoder 20 applies one or more filters to the reference samples and determines the primary value for the respective sample based on the filtered reference samples. For example, video encoder 20 can filter reference samples according to any of the examples provided elsewhere in this disclosure. In other examples, video encoder 20 can determine the primary value for the respective sample based on unfiltered reference samples. [0162] The video encoder 20 can then determine a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value (1016). 
The first value for the respective sample can be a sum of: (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by a left-above reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value. For example, video encoder 20 can determine the secondary value for the respective sample according to equation (11), where wT is the first weight, wL is the second weight, wTL is the third weight, (2^rightShift - wT - wL - wTL) is the fourth weight, and rightShift is the second value. In some examples, the offset value is equal to 32 and the second value is equal to 6. [0163] In some examples, video encoder 20 applies one or more filters to the reference samples (including the left reference sample, the above reference sample, and the left-above reference sample) and can determine the secondary value for the respective sample based on the filtered reference samples. For example, video encoder 20 can filter reference samples according to any of the examples provided elsewhere in this disclosure. Reference samples can be reference samples of the predictor block. That is, the reference samples can be in a column to the left of the predictor block and in a line above the predictor block. In the examples where the reference samples are reference samples of the predictor block, video encoder 20 can generate the filtered reference samples of the predictor block by applying a filter to initial, unfiltered reference samples of the predictor block. 
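Putting the four weights together, the weighted sum and right shift of action (1016) can be sketched as follows, using the offset value 32 and shift amount 6 given above. The function signature and integer sample inputs are illustrative assumptions; the arithmetic follows the five-term sum (i)-(v) described in the preceding paragraph.

```python
def pdpc_secondary_value(primary, ref_left, ref_top, ref_top_left,
                         w_left, w_top, w_top_left,
                         offset=32, right_shift=6):
    # Fourth weight: 2**right_shift minus the other three weights, so the
    # four weights sum to 64 and the final shift normalizes the weighted sum.
    w_primary = (1 << right_shift) - w_top - w_left - w_top_left
    first_value = (w_left * ref_left + w_top * ref_top
                   + w_top_left * ref_top_left
                   + w_primary * primary + offset)
    return first_value >> right_shift  # secondary value for the sample
```

With all boundary weights zero the secondary value reduces to the primary value, as expected deep inside a large block where the distance-based weights have decayed to zero.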
In other examples, video encoder 20 can use unfiltered values of one or more of the left reference sample, the above reference sample, and the left-above reference sample to determine the secondary value for the respective sample. [0164] In some examples, video encoder 20 only applies the simplified PDPC techniques of this disclosure to certain block sizes (for example, blocks larger than a threshold). Thus, in some instances, the video encoder 20 can determine that the secondary value for the respective sample is the first value shifted to the right by the second value based on a size of the predictor block being greater than a predetermined limit. Thus, in cases where the size of the predictor block is not greater than the limit, the secondary value for the respective sample can be equal to the primary value for the respective sample. [0165] Furthermore, in the example of Figure 10, video encoder 20 can generate residual data based on the predictor block and a coding block of the video data (1018). For example, video encoder 20 can generate residual data by subtracting samples of the predictor block from corresponding samples of the coding block. [0166] Figure 11 is a flow chart illustrating an example of operation of the video decoder 30 according to a technique of this disclosure. In the example of Figure 11, the video decoder 30 (for example, intra prediction processing unit 166 (Figure 9) of the video decoder 30) can generate a predictor block using an intra prediction mode (1100). In some instances, the intra prediction mode is a DC intra prediction mode. In some instances, the intra prediction mode is a horizontal intra prediction mode. In some examples, the intra prediction mode is a vertical intra prediction mode. In some instances, the intra prediction mode may be another type of intra prediction mode. In some examples, each of the samples in the predictor block is a luma sample. 
In some examples, each of the samples in the predictor block is a chroma sample. In some examples, video decoder 30 can apply the operation of Figure 11 to both luma and chroma samples. [0167] As part of the generation of the predictor block, the video decoder 30 can determine an initial value of a first weight (1102). In addition, the video decoder 30 can determine an initial value of a second weight (1104). The video decoder 30 can determine the initial value of the first weight and the initial value of the second weight according to any of the examples provided in relation to actions (1002) and (1004) and elsewhere in this disclosure. [0168] In addition, for each respective sample in a set of samples in the predictor block, the video decoder 30 can determine, based on the initial value of the first weight and a distance between the respective sample and a first limit of the predictor block, a value of the first weight for the respective sample (1106). The first limit can be an upper limit of the predictor block. The video decoder 30 can also determine, based on the initial value of the second weight and a distance between the respective sample and a second limit of the predictor block, a value of the second weight for the respective sample (1108). The second limit can be a left limit of the predictor block. In addition, the video decoder 30 can determine a value of a third weight for the respective sample (1110). The video decoder 30 can also determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample (1112). 
The video decoder 30 can determine the values of the first, second, third, and fourth weights for the respective sample in the same way as the video encoder 20, as described above in relation to actions (1006), (1008), (1010) and (1012), and described elsewhere in this disclosure. [0169] Additionally, in the example of Figure 11, video decoder 30 can determine a primary value for the respective sample according to the intra prediction mode (1114). For example, video decoder 30 can determine the primary value in the same way as video encoder 20, as described above in relation to action (1014) and described elsewhere in this disclosure. [0170] Then, the video decoder 30 can determine a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value (1116). The first value for the respective sample can be a sum of: (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by a left-above reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value. [0171] In some examples, video decoder 30 applies one or more filters to the reference samples (including the left reference sample, the above reference sample, and the left-above reference sample) and can determine the secondary value for the respective sample based on the filtered reference samples. For example, the video decoder 30 can filter the reference samples according to any of the examples provided elsewhere in this disclosure. Reference samples can be reference samples of the predictor block. 
That is, the reference samples can be in a column to the left of the predictor block and in a line above the predictor block. In examples where the reference samples are reference samples of the predictor block, the video decoder 30 can generate the filtered reference samples of the predictor block by applying a filter to initial, unfiltered reference samples of the predictor block. In other examples, the video decoder 30 may use unfiltered values of one or more of the left reference sample, the above reference sample, and the left-above reference sample to determine the secondary value for the respective sample. [0172] In some examples, the video decoder 30 only applies the simplified PDPC techniques of this disclosure to certain block sizes (for example, blocks larger than a threshold). Thus, in some instances, the video decoder 30 can determine that the secondary value for the respective sample is the first value shifted to the right by the second value based on a size of the predictor block being greater than a predetermined limit. Thus, in cases where the size of the predictor block is not greater than the limit, the secondary value for the respective sample can be equal to the primary value for the respective sample. [0173] In addition, in the example of Figure 11, the video decoder 30 can reconstruct, based on the predictor block and residual data, a decoded block of the video data (1118). For example, video decoder 30 can reconstruct the decoded block by adding samples of the predictor block to samples of the residual data. [0174] Certain aspects of this disclosure have been described with respect to extensions of the HEVC standard for purposes of illustration. However, the techniques described in this disclosure may be useful for other video encoding processes, including other standard or proprietary video encoding processes not yet developed. 
[0175] A video encoder, as described in this disclosure, can refer to a video encoder or a video decoder. Likewise, a video encoding unit can refer to a video encoder or a video decoder. Likewise, video encoding can refer to video encoding or video decoding, as applicable. This disclosure may use the term "video unit" or "video block" or "block" to refer to one or more blocks of samples and syntax structures used to encode samples of the one or more blocks of samples. Examples of types of video units can include CTUs, CUs, PUs, transformation units (TUs), macroblocks, macroblock partitions, and so on. In some contexts, the discussion of PUs can be interchanged with the discussion of macroblocks or macroblock partitions. Examples of types of video blocks can include coding tree blocks, coding blocks, and other types of video data blocks. [0176] The techniques can be applied to video encoding in support of any of several multimedia applications, such as over-the-air television broadcasts, cable television broadcasts, wired broadcasts, satellite television broadcasts, streaming video transmissions over the Internet, such as dynamic adaptive streaming over HTTP (DASH), digital video encoded on a data storage medium, decoding of digital video stored on a data storage medium, or other applications or combinations of the above examples. In some examples, system 10 can be configured to support unidirectional or bidirectional video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony. [0177] It should be recognized that, depending on the example, certain acts or events of any of the techniques described here can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the techniques). 
In addition, in certain examples, acts or events can be performed simultaneously, for example, through multithreaded processing, interrupt processing, or multiple processors, instead of sequentially. [0178] In one or more examples, the functions described can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions can be stored on or transmitted over a computer-readable medium, as one or more instructions or code, and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium, such as data storage media, or communication media, including any medium that facilitates the transfer of a computer program from one place to another, for example, according to a communication protocol. In this way, computer-readable media can generally correspond to (1) tangible computer-readable storage media that is non-transitory or (2) a communication medium such as a signal or carrier wave. The data storage media can be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. A computer program product may include a computer-readable medium. [0179] By way of example, and not by way of limitation, these computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, cache memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly called a computer-readable medium. 
For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed at tangible, non-transitory storage media. Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0180] The functionality described in this disclosure can be performed by fixed-function and/or programmable processing circuitry. For example, instructions can be executed by fixed-function and/or programmable processing circuitry. Such processing circuitry may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Consequently, the term "processor", as used here, can refer to any of the foregoing structures or to any other structure suitable for the implementation of the techniques described here. In addition, in some respects, the functionality described here can be provided in dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. 
In addition, the techniques can be fully implemented in one or more circuits or logic elements. The processing circuitry can be coupled to other components in several ways. For example, the processing circuitry can be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium. [0181] The techniques of this disclosure can be implemented in a wide variety of devices or apparatuses, including a cell phone, an integrated circuit (IC), or a set of ICs (for example, a chipset). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Instead, as described above, various units can be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with appropriate software and/or firmware. [0182] In this disclosure, ordinal terms such as "first", "second", "third", and so on, are not necessarily indicators of positions within an order, but can simply be used to distinguish different instances of the same thing. [0183] Several examples have been described. These and other examples are within the scope of the following claims.
Claims (31) [1] 1. Method of decoding video data, the method comprising: generating a predictor block using an intra prediction mode, in which generating the predictor block comprises: determining an initial value of a first weight; determining an initial value of a second weight; for each respective sample in a set of samples in the predictor block: determining, based on the initial value of the first weight and a distance between the respective sample and a first limit of the predictor block, a value of the first weight for the respective sample; determining, based on the initial value of the second weight and a distance between the respective sample and a second limit of the predictor block, a value of the second weight for the respective sample; determining a value of a third weight for the respective sample; determining a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determining a primary value for the respective sample according to the intra prediction mode; and determining a secondary value for the respective sample as a first value for the respective sample shifted to the right by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is to the left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by a left-above reference sample for the respective sample that is above and to the left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective 
sample, and (v) a displacement value; and reconstruct, based on the predictor block and residual data, a decoded block of video data. [2] 2. Method according to claim 1, in which, for each respective sample in the sample set in the predictor block, determining the primary value for the respective sample comprises determining the primary value for the respective sample based on reference samples from the predictor block, the reference samples including the left reference sample for the respective sample, the reference sample above for the respective sample, and the left reference sample above for the respective sample. [3] 3. Method, according to claim 2, in which the reference samples of the block are filtered reference samples and the method further comprises: generating the filtered reference samples of the predictor block by applying a filter to the initial reference samples of the predictor block. [4] 4. Method according to claim 1, in which, for each respective sample in the sample set in the predictor block: determining the value of the first weight for the respective sample comprises determining the value of the first weight for the respective sample by carrying out a deviation operation in the initial value of the first weight by an amount based on the distance between the respective sample and the first limit of the predictor block; and determining the value of the second weight comprises determining the value of the second weight for the respective sample by performing the deviation operation on the initial value of the second weight by an amount based on the distance between the respective sample and the second limit of the predictor block. [5] 5. 
Method, according to claim 1, in which, for each respective sample in the set of samples in the predictor block, the value of the fourth weight for the respective sample is equal to (64 - wT - wL - wTL), where wT is the value of the second weight for the respective sample, wL is the value of the first weight for the respective sample, and wTL is the value of the third weight for the respective sample, the displacement value is equal to 32, and the second value is equal to 6. [6] 6. Method according to claim 1, in which determining the value of the third weight for the respective sample comprises: determining the value of the third weight for the respective sample as (wL »4) + (wT» 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and »is a right shift operation. [7] 7. Method according to claim 1, further comprising, for each respective sample in the set of samples in the predictor block: determining the value of the third weight for the respective sample as a sum of a first parameter multiplied by the value of the first weight for the respective sample plus a second parameter multiplied by the value of the second weight for the respective sample. [8] 8. Method according to claim 7, further comprising: determining, based on an intra prediction direction, the values of the first parameter and the second parameter, obtaining values of the first parameter and the second parameter of a bit stream that comprises a coded representation of the video data, or determining the values of the first parameter and the second parameter based on an intra prediction angle difference or an intra mode index difference with respect to a horizontal prediction direction and / or a vertical. [9] A method according to claim 1, wherein the intra prediction mode is a DC intra prediction mode, a horizontal intra prediction mode, or a vertical intra prediction mode. [10] 10. 
A method of encoding video data, the method comprising: generating a predictor block using an intra prediction mode, wherein generating the predictor block comprises: determining an initial value of a first weight; determining an initial value of a second weight; and, for each respective sample in a set of samples in the predictor block: determining, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determining, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determining a value of a third weight for the respective sample; determining a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determining a primary value for the respective sample according to the intra prediction mode; and determining a secondary value for the respective sample as a first value for the respective sample right-shifted by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and generating residual data based on the predictor block and a coding block of the video data.

[11] 11. The method according to claim 10, wherein, for each respective sample in the set of samples in the predictor block: determining the value of the first weight for the respective sample comprises determining the value of the first weight for the respective sample by performing a shift operation on the initial value of the first weight by an amount based on the distance between the respective sample and the first boundary of the predictor block; and determining the value of the second weight comprises determining the value of the second weight for the respective sample by performing the shift operation on the initial value of the second weight by an amount based on the distance between the respective sample and the second boundary of the predictor block.

[12] 12. The method according to claim 10, wherein determining the value of the third weight for the respective sample comprises: determining the value of the third weight for the respective sample as (wL >> 4) + (wT >> 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and >> is a right-shift operation.

[13] 13. The method according to claim 10, further comprising, for each respective sample in the set of samples in the predictor block: determining the value of the third weight for the respective sample as a sum of a first parameter multiplied by the value of the first weight for the respective sample plus a second parameter multiplied by the value of the second weight for the respective sample.

[14] 14. The method according to claim 10, wherein the intra prediction mode is a DC intra prediction mode, a horizontal intra prediction mode, or a vertical intra prediction mode.

[15] 15.
An apparatus for decoding video data, the apparatus comprising: one or more storage media configured to store the video data; and one or more processors configured to: generate a predictor block using an intra prediction mode, wherein, as part of generating the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; and, for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample right-shifted by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and reconstruct, based on the predictor block and residual data, a decoded block of the video data.

[16] 16. The apparatus according to claim 15, wherein, for each respective sample in the set of samples in the predictor block, the one or more processors are configured to determine the primary value for the respective sample based on reference samples of the predictor block, the reference samples including the left reference sample for the respective sample, the above reference sample for the respective sample, and the above-left reference sample for the respective sample.

[17] 17. The apparatus according to claim 16, wherein the reference samples of the predictor block are filtered reference samples and the one or more processors are configured to generate the filtered reference samples of the predictor block by applying a filter to initial reference samples of the predictor block.

[18] 18. The apparatus according to claim 15, wherein, for each respective sample in the set of samples in the predictor block, the one or more processors are configured to: determine the value of the first weight for the respective sample by performing a shift operation on the initial value of the first weight by an amount based on the distance between the respective sample and the first boundary of the predictor block; and determine the value of the second weight for the respective sample by performing the shift operation on the initial value of the second weight by an amount based on the distance between the respective sample and the second boundary of the predictor block.

[19] 19.
The apparatus according to claim 15, wherein, for each respective sample in the set of samples in the predictor block, the value of the fourth weight for the respective sample is equal to (64 - wT - wL - wTL), where wT is the value of the second weight for the respective sample, wL is the value of the first weight for the respective sample, and wTL is the value of the third weight for the respective sample, the offset value is equal to 32, and the second value is equal to 6.

[20] 20. The apparatus according to claim 15, wherein the one or more processors are configured to determine the value of the third weight for the respective sample as (wL >> 4) + (wT >> 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and >> is a right-shift operation.

[21] 21. The apparatus according to claim 15, wherein, for each respective sample in the set of samples in the predictor block, the one or more processors are configured to determine the value of the third weight for the respective sample as a sum of a first parameter multiplied by the value of the first weight for the respective sample plus a second parameter multiplied by the value of the second weight for the respective sample.

[22] 22. The apparatus according to claim 21, wherein the one or more processors are further configured to: determine, based on an intra prediction direction, values of the first parameter and the second parameter; obtain values of the first parameter and the second parameter from a bitstream comprising an encoded representation of the video data; or determine the values of the first parameter and the second parameter based on an intra prediction angle difference or an intra mode index difference relative to a horizontal and/or a vertical prediction direction.

[23] 23. The apparatus according to claim 15, wherein the intra prediction mode is a DC intra prediction mode, a horizontal intra prediction mode, or a vertical intra prediction mode.

[24] 24. The apparatus according to claim 23, wherein the apparatus comprises: an integrated circuit, a microprocessor, or a wireless communication device.

[25] 25. An apparatus for encoding video data, the apparatus comprising: one or more storage media configured to store the video data; and one or more processors configured to: generate a predictor block using an intra prediction mode, wherein, as part of generating the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; and, for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample right-shifted by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and generate residual data based on the predictor block and a coding block of the video data.

[26] 26. The apparatus according to claim 25, wherein, for each respective sample in the set of samples in the predictor block, the one or more processors are configured to: determine the value of the first weight for the respective sample by performing a shift operation on the initial value of the first weight by an amount based on the distance between the respective sample and the first boundary of the predictor block; and determine the value of the second weight for the respective sample by performing the shift operation on the initial value of the second weight by an amount based on the distance between the respective sample and the second boundary of the predictor block.

[27] 27. The apparatus according to claim 25, wherein the one or more processors are configured to determine the value of the third weight for the respective sample as (wL >> 4) + (wT >> 4), where wL is the value of the first weight for the respective sample, wT is the value of the second weight for the respective sample, and >> is a right-shift operation.

[28] 28.
The apparatus according to claim 25, wherein, for each respective sample in the set of samples in the predictor block, the one or more processors are configured to determine the value of the third weight for the respective sample as a sum of a first parameter multiplied by the value of the first weight for the respective sample plus a second parameter multiplied by the value of the second weight for the respective sample.

[29] 29. The apparatus according to claim 25, wherein the intra prediction mode is a DC intra prediction mode, a horizontal intra prediction mode, or a vertical intra prediction mode.

[30] 30. The apparatus according to claim 25, wherein the apparatus comprises: an integrated circuit, a microprocessor, or a wireless communication device.

[31] 31. A computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to: generate a predictor block using an intra prediction mode, wherein, as part of generating the predictor block, the one or more processors: determine an initial value of a first weight; determine an initial value of a second weight; and, for each respective sample in a set of samples in the predictor block: determine, based on the initial value of the first weight and a distance between the respective sample and a first boundary of the predictor block, a value of the first weight for the respective sample; determine, based on the initial value of the second weight and a distance between the respective sample and a second boundary of the predictor block, a value of the second weight for the respective sample; determine a value of a third weight for the respective sample; determine a value of a fourth weight for the respective sample based on the value of the first weight for the respective sample, the value of the second weight for the respective sample, and the value of the third weight for the respective sample; determine a primary value for the respective sample according to the intra prediction mode; and determine a secondary value for the respective sample as a first value for the respective sample right-shifted by a second value, the first value for the respective sample being a sum of (i) the value of the first weight for the respective sample multiplied by a left reference sample for the respective sample that is left of the respective sample, (ii) the value of the second weight for the respective sample multiplied by an above reference sample for the respective sample that is above the respective sample, (iii) the value of the third weight for the respective sample multiplied by an above-left reference sample for the respective sample that is above and left of the respective sample, (iv) the value of the fourth weight for the respective sample multiplied by the primary value for the respective sample, and (v) an offset value; and reconstruct, based on the predictor block and residual data, a decoded block of the video data.
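The weighted combination recited in the claims can be sketched in a few lines. This is only an illustrative sketch, not the claimed method itself: the initial weight of 32 and the halving-per-sample decay are assumptions (the claims leave the initial weight values and the distance-based shift amounts open), and the sample coordinates x and y stand in for the distances to the left and top boundaries of the predictor block.

```python
def pdpc_sample(primary, left_ref, above_ref, above_left_ref, x, y, init_w=32):
    """Combine a primary intra prediction value with boundary reference
    samples using the weighted sum described in claim 1."""
    # Per-sample weights obtained by shifting the initial weight by an
    # amount based on the distance to the block boundary (claim 4); a
    # halving-per-sample decay is assumed here for illustration.
    wL = init_w >> x   # first weight: decays with distance to the left boundary
    wT = init_w >> y   # second weight: decays with distance to the top boundary
    # Third weight derived from the first two (claim 6).
    wTL = (wL >> 4) + (wT >> 4)
    # Fourth weight chosen so the four weights sum to 64 (claim 5).
    w4 = 64 - wT - wL - wTL
    # Weighted sum plus the offset value 32, right-shifted by the second
    # value 6 (claims 1 and 5), i.e. a divide-by-64 with rounding.
    return (wL * left_ref + wT * above_ref + wTL * above_left_ref
            + w4 * primary + 32) >> 6
```

Because the four weights always sum to 64, a flat neighborhood is preserved exactly (all inputs equal to 100 yield 100), and for samples far from both boundaries the left and above weights shift down to zero, so the result collapses to the primary prediction value.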
Family patents (publication number | publication date):
EP3695606A1 | 2020-08-19
US20190110045A1 | 2019-04-11
TW201924346A | 2019-06-16
SG11202001959XA | 2020-04-29
CN111183645A | 2020-05-19
WO2019074905A1 | 2019-04-18
KR20200057014A | 2020-05-25
US10965941B2 | 2021-03-30
Legal events:
2021-11-23 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority (application number | filing date | title):
US 62/570,019 (provisional) | 2017-10-09
US 16/154,261 | 2018-10-08 | Position-dependent prediction combinations in video coding
PCT/US2018/054979 | 2018-10-09 | Position-dependent prediction combinations in video coding
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|